IMAGE SEGMENTATION METHODS AND SYSTEMS
20230386044 · 2023-11-30
Inventors
CPC classification
G06T7/187
PHYSICS
International classification
G06T7/187
PHYSICS
Abstract
According to an aspect, there is provided a computer-implemented segmentation method (100, 210), the method comprising: performing a first automated segmentation operation (400) on one or more first images of a subject area to automatically determine a first segmentation map of the subject area, wherein the one or more first images are generated using a first technique; performing, at least partially based on the first segmentation map, a second automated segmentation operation (600) on one or more second images of the subject area to automatically determine a second segmentation map of the subject area, wherein the one or more second images of the subject area are generated using a second technique different from the first technique, the first and second imaging techniques to capture different properties of the subject area; automatically determining a mismatch between segmented portions of the first and second segmentation maps.
Claims
1. A computer-implemented segmentation method, the method comprising: performing a first automated segmentation operation on one or more first images of a subject area to automatically determine a first segmentation map of the subject area, wherein the one or more first images are generated using a first technique; performing, at least partially based on the first segmentation map, a second automated segmentation operation on one or more second images of the subject area to automatically determine a second segmentation map of the subject area, wherein the one or more second images of the subject area are generated using a second technique different from the first technique, the first and second imaging techniques to capture different properties of the subject area; automatically determining a mismatch between segmented portions of the first and second segmentation maps, wherein the second segmentation map is determined using a region growing procedure to grow regions around seed locations within one or more of the regions of interest, based on one or more predetermined region growing criteria.
2. The method of claim 1, wherein performing the first segmentation operation comprises: automatically applying one or more thresholds to: values of pixels within the one or more first images; or values of elements within one or more maps determined based on the one or more first images, to determine a plurality of zones within the one or more first images or maps; and providing the zones of the one or more first images or maps as separate inputs to a procedure for determining the first segmentation map.
3. The method of claim 1, wherein performing the second segmentation operation at least partially based on the first segmentation map comprises: identifying one or more regions of interest within the one or more second images at least partially based on the first segmentation map; and selectively utilizing information specifying the one or more regions of interest in the second segmentation operation.
4. The method of claim 1, wherein the seed locations are selected at least partially based on information within and/or derived from the first and/or second images at the seed locations.
5. The method of claim 1, wherein generating an estimated second segmentation map comprises: selecting a plurality of seed locations; and expanding regions around the seed locations to identify segmented portions using a reinforcement learning model.
6. The method of claim 1, wherein the first image is a perfusion weighted image, wherein the first segmentation operation is to segment a portion of the subject area comprising a lesion, captured within the perfusion weighted image.
7. The method of claim 1, wherein the second image is a diffusion weighted image, wherein the second segmentation operation is to segment a portion of the subject area comprising an infarction captured within the diffusion weighted image.
8. The method of claim 1, wherein the method further comprises: predicting a rate of change of a segmented region within the first and/or second segmentation map over time based on one or more of the first image, the second image, the first segmentation map and the second segmentation map; and generating a timeline of predicted change of the segmented region.
9. The method of claim 8, wherein the method comprises predicting a rate of change of the mismatch; and generating a timeline of predicted rate of change of the mismatch.
10. The method of claim 1, wherein the method comprises determining a first map of a first property within the subject area based on the one or more first images, wherein the first segmentation operation is performed based on the first map.
11. A non-transitory computer readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to: perform a first automated segmentation operation on one or more first images of a subject area to automatically determine a first segmentation map of the subject area, wherein the one or more first images are generated using a first technique; perform, at least partially based on the first segmentation map, a second automated segmentation operation on one or more second images of the subject area to automatically determine a second segmentation map of the subject area, wherein the one or more second images of the subject area are generated using a second technique different from the first technique, the first and second imaging techniques to capture different properties of the subject area; automatically determine a mismatch between segmented portions of the first and second segmentation maps, wherein the second segmentation map is determined using a region growing procedure to grow regions around seed locations within one or more of the regions of interest, based on one or more predetermined region growing criteria.
12. An image segmentation system, the system comprising a processor and a memory storing computer readable instructions which, when executed by the processor, cause the processor to: perform a first automated segmentation operation based on one or more first images of a subject area to automatically determine a first segmentation map of the subject area, wherein the one or more first images are generated using a first technique; perform, at least partially based on the first segmentation map, a second automated segmentation operation based on one or more second images of the subject area to automatically determine a second segmentation map of the subject area, wherein the one or more second images of the subject area are generated using a second technique different from the first technique, the first and second imaging techniques to capture different properties of the subject area; automatically determine a mismatch between segmented portions of the first and second images based on the first and second segmentation maps; and output the determined mismatch to a user of the system, wherein determination of the second segmentation map comprises using a region growing procedure to grow regions around seed locations within one or more of the regions of interest, based on one or more predetermined region growing criteria.
13. The system of claim 12, wherein the memory further stores instructions which when executed by the processor cause the processor to: predict a rate of change of a segmented region within the first and/or second segmentation map over time based on one or more of the first image, the second image, the first segmentation map and the second segmentation map; and generate a timeline of predicted change of the segmented region.
14. The system of claim 13 wherein the memory further stores instructions which when executed by the processor cause the processor to: predict a mismatch at a predetermined time after the time at which the first and/or second images were captured based on one or more of the first image, the second image, the first segmentation map and the second segmentation map; and output the predicted mismatch or a rate of change of the mismatch to a user of the system.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] Exemplary embodiments will now be described, by way of example only, with reference to the following drawings, in which:
DETAILED DESCRIPTION OF EMBODIMENTS
[0045] With reference to
[0046] The first segmentation operation may be an automated segmentation operation performed on the one or more first images. In other words, no user input (other than providing the one or more first images) may be provided to a procedure performing the first segmentation operation, in order for the first segmentation operation to provide the first segmentation map.
[0047] The first segmentation operation may be performed at least partially using a first Artificial Neural Network (ANN), such as a Convolutional Neural Network (CNN) or Region-based CNN (R-CNN). The first ANN may be of a standard architecture known in the art. Alternatively, the first ANN may be a variation of a standard architecture. The first ANN may be implemented by using a pre-trained ANN. For example, the first segmentation operation may be performed using a pre-trained U-Net network of a standard architecture. Alternatively, the first ANN may comprise any desirable ANN, which may have been trained using sequences of images within a target input domain and labels comprising segmentation maps that have been previously determined for the images within the training sequence. The ANN may be trained using back propagation based on a loss function, e.g. a dice coefficient loss function. Alternatively, the ANN may have been trained using any other suitable method.
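By way of illustration, the Dice coefficient loss mentioned above may be sketched as follows. This is a minimal NumPy sketch of the standard soft Dice loss, not an implementation taken from the disclosure; the function name and smoothing term `eps` are illustrative.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - (2*|A∩B| + eps) / (|A| + |B| + eps).

    pred and target are arrays of per-element probabilities or labels
    in [0, 1]; eps avoids division by zero for empty masks.
    """
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

The loss approaches 0 when predicted and label segmentation maps coincide and 1 when they are disjoint, which is why it is commonly used with back propagation for segmentation networks.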
[0048] When the first segmentation operation is performed on more than one first image, the first images may be provided to the ANN as separate input channels. Each of the first images may be pre-processed, e.g. separately pre-processed. For example, each of the first images may be separately pre-processed as described with reference to the system 200 described below. In some arrangements, the subject area may be a 3D area, i.e. a volume, and the more than one first images may comprise cross-sections through the subject area at different respective depths. In such arrangements, the first ANN may be configured to generate a 3D segmentation map of the subject area based on the first images, e.g. in which elements of the segmentation map comprise voxels comprising values indicating the region of the segmentation map that the element is included in.
[0049] The method 100 comprises a second block 104, at which a second segmentation operation is performed on one or more second images of the subject area. The second images may be captured or generated by using a second imaging technique. The second imaging technique may be different from the first imaging technique. In particular, the second imaging technique may capture a different property or properties of the subject area compared to a property or properties captured by the first imaging technique. For example, when the first imaging technique is PWI and the second imaging technique is DWI, the first images may capture a flow rate of fluid, e.g. blood, through the subject area and the second images may capture a rate of diffusion of molecules within the subject area.
[0050] The second segmentation operation may be an automated segmentation operation performed on the one or more second images. In other words, no user input (other than providing the one or more second images and the first segmentation map) may be provided to a procedure performing the second segmentation operation in order for the procedure to provide the second segmentation map.
[0051] The second segmentation operation is performed at least partially based on the first segmentation map. For example, the first segmentation map and/or information derived from the first segmentation map may be provided as an input to a procedure performing the second segmentation operation. In particular, regions of interest within the one or more second images may be determined at least partially based on the first segmentation map. Information specifying the one or more regions of interest may be utilized in the second segmentation operation, e.g. in order to focus attention of a procedure performing the second segmentation operation. The first segmentation map may at least partially define an attention map used in the second segmentation operation. For example, the first segmentation map, or regions of interest extracted from the first segmentation map, may be used as the attention map. Alternatively, the attention map may be determined at least partially based on the first segmentation map.
[0052] In one or more arrangements, e.g. as described below, an estimated second segmentation map may be generated using a region growing procedure to grow regions around seed locations within one or more of the regions of interest within the one or more second images. The second segmentation operation may be performed at least partially based on the estimated second segmentation map. For example, the one or more second images (or maps derived from the second images) and the estimated second segmentation map may be provided as separate inputs to a procedure for performing the second segmentation operation.
[0053] The second segmentation operation may be performed at least partially using a second ANN, such as a CNN. The ANN may be structured to receive the one or more second images, one or more maps derived from the second image and/or information derived from the first segmentation map, e.g. the estimated second segmentation map, and output the second segmentation map. Examples of suitable network architectures of the second ANN are described below with reference to
[0054] The second ANN may be trained using training sequences comprising images within a target input domain, information representative of the information that may be derived from a corresponding first segmentation map, and labels comprising segmentation maps that have been determined for the images within the training sequence. The network may be trained using a back propagation procedure based on a loss function, such as a dice coefficient loss function. Alternatively, the network may be trained using any other suitable training procedure.
[0055] The method 100 comprises a third block 106, at which a mismatch between segmented portions of the first and second segmentation maps is determined, e.g. automatically determined (without further input from a user). For example, the mismatch may be determined by subtracting a segmented portion of the second segmentation map (or portion thereof) from a segmented portion of the first segmentation map (or portion thereof) or vice versa. In some arrangements, the method may comprise outputting the mismatch, e.g. a magnitude of an area or volume of the mismatch, to a user.
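The subtraction of segmented portions described in this paragraph may be sketched as follows, assuming the two segmented portions are available as binary masks of equal shape (an illustrative sketch; the function name is hypothetical):

```python
import numpy as np

def mismatch(seg1, seg2):
    """Elements segmented in the first map but not in the second
    (set difference), e.g. lesion minus infarct.

    Returns the mismatch mask and its size (area or volume in elements).
    """
    seg1 = np.asarray(seg1, dtype=bool)
    seg2 = np.asarray(seg2, dtype=bool)
    diff = seg1 & ~seg2
    return diff, int(diff.sum())
```

The returned size, scaled by the pixel or voxel dimensions, gives the magnitude of the area or volume of the mismatch that may be output to a user.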
[0056]
[0057] As illustrated, the system 200 is configured to receive first and second medical images I.sub.1, I.sub.2 of a subject area, e.g. a volume of brain tissue, as inputs to the system. As described above with reference to
[0058] The system 200 comprises a mismatch estimation block 210 at which a mismatch between anomalies detectable within the first and second medical images I.sub.1, I.sub.2 is determined, e.g. automatically determined. The function performed at the mismatch estimation block 210 may be performed using the segmentation method 100 described above. The mismatch estimation block 210 may comprise a first segmentation block 212 at which an anomaly detectable from the first medical images I.sub.1 is segmented. An anomaly detectable from PWI images may correspond to the lesion, e.g. the combination of the infarct and penumbra regions of the lesion, within the subject area. The first segmentation block 212 may comprise the first block 102 of the method 100 and the features described in relation to the first block 102 may apply equally to the first segmentation block 212 and vice versa.
[0059] With reference to
[0060] The first property maps M.sub.1 of the subject area may be derived by processing the one or more first medical images I.sub.1 using an ANN, such as a CNN. For example, the PWI images may be input to a CNN trained to process the PWI images to derive the arterial input function based on which the haemodynamic parameters may be determined to populate the first property maps M.sub.1.
[0061] The first segmentation method 300 may further comprise a third block 306, at which the one or more first medical images I.sub.1 and/or the one or more first property maps M.sub.1 of the haemodynamic parameters within the subject area are processed in order to produce the first segmentation map.
[0062]
[0063] At a second block 404 of the first segmentation operation 400, the plurality of zones of the first property maps M.sub.1 and/or first medical images I.sub.1 may be provided as inputs, e.g. at separate respective input channels, to a procedure for generating the first segmentation map. The procedure may comprise propagating the inputs through an ANN, such as a U-Net network that has been trained to segment the first medical images I.sub.1, e.g. PWI images. Providing the plurality of zones to the artificial neural network as separate input channels may assist the network in accurately segmenting the first medical images I.sub.1, by better enabling the network to give different levels of focus to each of the zones and by enabling different weights to be given to nodes operating on features extracted from each of the zones. Furthermore, by providing the plurality of zones to the ANN as separate input channels, the ANN can generate the first segmentation map automatically in a manner that is agnostic of a threshold that may have been selected by a practitioner manually interpreting the first medical images I.sub.1. Moreover, this may enable the ANN to accurately generate the first segmentation map without requiring the threshold to be selected, e.g. manually selected, by a user.
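The thresholding of a property map into zones provided as separate input channels may be sketched as follows. This is an illustrative sketch; the function name, the use of ascending cut points, and the channel layout are assumptions, not details from the disclosure.

```python
import numpy as np

def map_to_zone_channels(prop_map, thresholds):
    """Split a property map (e.g. a haemodynamic parameter map) into
    binary zone channels using ascending threshold cut points.

    Produces len(thresholds) + 1 channels, one per zone, suitable for
    feeding an ANN as separate input channels.
    """
    prop_map = np.asarray(prop_map, dtype=float)
    edges = [-np.inf, *thresholds, np.inf]
    zones = [(prop_map >= lo) & (prop_map < hi)
             for lo, hi in zip(edges[:-1], edges[1:])]
    return np.stack(zones).astype(np.float32)  # shape: (n_zones, *map shape)
```

Because the zones partition the map, each element belongs to exactly one channel, so the network can weight features from each zone independently without any single threshold having been chosen for it.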
[0064] The first segmentation operation 400 may comprise a third block 406 at which one or more post-processing operations are performed on the first segmentation map generated by the procedure at the second block 404. As described above, in some arrangements, the procedure may be configured to generate a 3D segmentation map based on the first images and, at the third block 406, the 3D segmentation map may be processed, e.g. using one or more morphological operations, in order to, for example, remove holes or other artefacts from the first segmentation map. As depicted in
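A post-processing operation of the kind described for the third block 406 may be sketched as follows, using SciPy's morphological hole filling (one illustrative clean-up step; the disclosure does not prescribe a particular chain of morphological operations):

```python
import numpy as np
from scipy import ndimage

def postprocess_segmentation(seg):
    """Fill enclosed holes in a binary segmentation map, one example of a
    morphological operation for removing artefacts after segmentation."""
    seg = np.asarray(seg, dtype=bool)
    return ndimage.binary_fill_holes(seg)
```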
[0065] Returning to
[0066] With reference to
[0067] The second segmentation method 500 further comprises a second block 504, at which the second medical images may be segmented in order to generate a second segmentation map S.sub.2. As shown in
[0068]
[0069] As depicted in
[0070] With reference to
[0071] The ROI identification method 710 may further comprise a second block 714, at which a mask, e.g. to be applied to the second medical images, is determined based on the second parameter map M.sub.2, e.g. the map of ADC, derived from the second medical images. For example, the mask may be determined by identifying regions of the second parameter map in which the value of the parameter is less than a threshold value, greater than a threshold value and/or equal to a threshold value, and/or is within or outside of one or more threshold ranges. In one arrangement, the mask may be determined by identifying regions of the second parameter map M.sub.2 in which ADC is less than 600×10.sup.−6 mm.sup.2/s.
[0072] The ROI identification method 710 may further comprise a third block 716, at which the regions extracted at the first block 712 and regions of the second medical images defined by the mask determined at the second block 714 may be merged, in order to identify the ROI(s). For example, the regions may be merged by adding the regions, by intersecting the regions or by any other desirable method. The ROI(s) identified at the third block 716 may be output from the ROI identification method 710.
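The masking and merging steps of blocks 714 and 716 may be sketched as follows. The 600×10⁻⁶ cut-off is the one mentioned in the text; the function names and the `mode` parameter are illustrative assumptions.

```python
import numpy as np

def adc_mask(adc_map, cutoff=600e-6):
    """Mask of elements whose ADC value falls below the cut-off (block 714)."""
    return np.asarray(adc_map, dtype=float) < cutoff

def merge_rois(region_a, region_b, mode="union"):
    """Merge two candidate ROI masks by adding (union) or intersecting
    them (block 716)."""
    a = np.asarray(region_a, dtype=bool)
    b = np.asarray(region_b, dtype=bool)
    return (a | b) if mode == "union" else (a & b)
```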
[0073] Referring now to
[0074] The ROI refinement method 720 may comprise a first block 722, at which the one or more ROI(s) are processed, e.g. by performing one or more morphological operations on the one or more ROIs identified by the ROI identification method 710. For example, a dilation operation and/or an erosion operation may be performed on the ROIs, e.g. in order to remove holes or other artefacts from the ROIs.
[0075] The ROI refinement method 720 may comprise a second block 724, at which the ROI(s) are intersected with a segmented portion of the first segmentation map S.sub.1, e.g. in order to ensure that the ROI(s) contain only regions that are within a segmented portion of the first segmentation map. In this way, information from the first medical images may be incorporated to improve the estimation of the second segmentation map S.sub.2. In an arrangement in which the second segmentation operation is being performed in order to segment an infarct from a subject area of brain tissue within a DWI image, intersecting the ROIs with the segmented region of the first segmentation map may ensure that the ROIs are within, e.g. entirely within, the lesion that has been determined by segmenting a PWI image of the subject area. The refined ROIs generated by the ROI refinement method 720 may be output from the second block 724.
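The two refinement steps (blocks 722 and 724) may be sketched together as follows, using a dilation followed by an erosion (a morphological closing) as one possible clean-up; the function name and the choice of closing are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def refine_rois(rois, first_seg_map, iterations=1):
    """Morphologically clean the ROI mask (block 722), then keep only the
    part lying inside the segmented portion of the first segmentation map
    (block 724)."""
    rois = np.asarray(rois, dtype=bool)
    cleaned = ndimage.binary_dilation(rois, iterations=iterations)
    cleaned = ndimage.binary_erosion(cleaned, iterations=iterations)
    return cleaned & np.asarray(first_seg_map, dtype=bool)
```

The final intersection guarantees that no refined ROI element falls outside the first segmentation map, which is the containment property described above.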
[0076] With reference to
[0077] The first segmentation estimation method 730 may comprise a first block 732 at which a mask, e.g. to be applied to the one or more refined ROIs, is determined based on the first parameter map M.sub.1. For example, the mask may be determined by identifying regions of the first parameter map in which the value of the parameter is less than a threshold value, greater than a threshold value and/or equal to a threshold value, and/or is within or outside of one or more threshold ranges. In one arrangement, the mask may be determined by identifying regions of the first parameter map M.sub.1 in which Tmax is greater than 8 seconds.
[0078] At a second block 734 of the first segmentation estimation method 730, the mask determined at the first block 732 may be applied to, e.g. intersected with, the refined one or more ROIs, e.g. in order to identify the portions of the ROIs in which Tmax is greater than 8 seconds.
[0079] The first segmentation estimation method 730 may comprise a third block 736, at which a seed location is selected within the intersection of the refined ROIs identified at the second block 734. In some arrangements, the seed may be selected using a randomized process. In a fourth block 738, a region may be grown around the selected seed location based on values of the elements, e.g. pixels, of the second medical images I.sub.2. For example, the values of the elements of the second medical images I.sub.2 adjacent to the seed location (or elements already added to the region being grown) may be compared to a value at the seed location (or element already added to the region being grown) based on one or more region growing criteria, and the elements may be added to the region being grown if the one or more region growing criteria are met.
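The element-by-element growth described in this paragraph may be sketched as a breadth-first search, using similarity to the seed value within a tolerance as one possible region growing criterion (the disclosure leaves the criteria open; the function name and `tol` parameter are illustrative):

```python
import numpy as np
from collections import deque

def grow_region(image, seed, tol=0.1):
    """Grow a region from a seed location by breadth-first search, adding
    4-connected neighbours whose value lies within `tol` of the seed value."""
    image = np.asarray(image, dtype=float)
    region = np.zeros(image.shape, dtype=bool)
    seed_val = image[seed]
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not region[nr, nc]
                    and abs(image[nr, nc] - seed_val) <= tol):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region
```

Comparing candidate elements against the seed value is only one choice; as the text notes, elements may instead be compared against elements already added to the growing region.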
[0080] The third and fourth blocks 736, 738 of the first segmentation estimation method 730 may be repeated a number of times, e.g. based on the input n, in order to grow a desirable number of regions within the refined ROIs.
[0081] The combination of the regions grown during each repetition of the third and fourth blocks 736, 738 may be output from the first segmentation estimation method 730 (and the third block 616 of the first part 610 of the second segmentation operation 600) as the estimated second segmentation map S.sub.2E.
[0082] With reference to
[0083] The second segmentation estimation method 740 may comprise a first block 742, at which one or more seed locations are identified within one or more refined ROIs, and regions are grown from the seed locations to determine potential infarcts. The seed locations may be determined using a reinforcement learning procedure, which may be configured to select seed locations that optimize the regions grown, based on any desirable optimization criterion selected to limit a distance metric relative to ground truth data. The seed location may be selected by the reinforcement learning model at least partially based on the second parameter map M.sub.2. A reward function may be formulated to select the seeds, e.g. optimal seeds, such that multiple seeds are navigated toward the optimal region of interest using any desirable distance-based optimization function, the error being minimized when the seed position is optimally selected. In one or more arrangements, the seed locations may be identified or selected using a Deep Q-learning network of a standard architecture.
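The reward-driven seed selection may be illustrated with a greatly simplified sketch. A greedy search over candidate seeds stands in for the Deep Q-learning network mentioned above, and the symmetric-difference reward is one possible distance-based function; both are illustrative assumptions, not the disclosed training procedure.

```python
import numpy as np

def seed_reward(region, reference):
    """Reward = negative symmetric difference between a grown region and a
    reference mask (one possible distance-based optimization function)."""
    region = np.asarray(region, dtype=bool)
    reference = np.asarray(reference, dtype=bool)
    return -int((region ^ reference).sum())

def best_seed(candidates, grow, reference):
    """Greedy stand-in for learned seed selection: score every candidate
    seed by the reward of the region grown from it and keep the best."""
    return max(candidates, key=lambda seed: seed_reward(grow(seed), reference))
```

A trained Q-network would instead estimate these rewards from the parameter maps without needing ground truth at inference time; the sketch only shows the shape of the objective.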
[0084] The second segmentation estimation method 740 may further comprise a second block 744, at which a region is grown around the selected seed location based on values of the elements, e.g. pixels, of the second medical images I.sub.2. The region around the selected seed location may be grown using the region growing procedure used at the fourth block 738 of the first segmentation estimation method 730. Alternatively, the region around the selected seed location may be grown using an alternative region growing procedure, e.g. applying different region growing criteria.
[0085] The first and second blocks 742, 744 of the second segmentation estimation method 740 may be repeated a number of times, e.g. based on the input n, in order to grow a desirable number of regions within the one or more refined ROI(s).
[0086] The combination of the regions grown during each repetition of the first and second blocks 742, 744 may be output from the second segmentation estimation method 740 (and the third block 616 of the second segmentation operation 600) as the estimated second segmentation map S.sub.2E.
[0087] Returning to
[0088] With reference to
[0089] The first ANN 810 may comprise a second portion 814 configured to receive the intermediate segmentation map and the second medical images I.sub.2 and perform one or more morphological operations such as a dilation operation, on the inputs, e.g. to produce dilated masks of the inputs.
[0090] The first ANN 810 may further comprise a third portion 816 comprising one or more layers, e.g. feature extraction layers, such as convolutional layers, configured to receive the dilated mask of the intermediate segmentation map and the second medical images, the second medical images I.sub.2 and the estimated second segmentation map S.sub.2E, and perform feature extraction on the inputs.
[0091] The first ANN 810 may further comprise a fourth portion 818 comprising one or more layers, e.g. fully connected layers, configured to receive the output from the third portion 816, e.g. from the feature extraction layers, and generate the second segmentation map S.sub.2. The second segmentation map may be output from an output layer of the fourth portion 818.
[0092] The first ANN 810 may be trained end to end. Alternatively, one or more of the portions of the first ANN may be trained individually, e.g. using suitable training data reflecting the part of the segmentation process being performed by the particular portion being trained.
[0093] With reference to
[0094] The second ANN 820 may comprise a second portion 824 comprising one or more layers, such as one or more convolutional layers and/or one or more fully connected layers, configured to receive the estimated second segmentation map S.sub.2E and the second property map M.sub.2 as inputs and generate a second intermediate segmentation map.
[0095] The second ANN 820 may comprise a third portion 826 comprising one or more layers, such as one or more convolutional layers and/or one or more fully connected layers, configured to receive the estimated second segmentation map S.sub.2E, the first intermediate segmentation map and the second intermediate segmentation map and to merge the first and second intermediate segmentation maps to generate the second segmentation map. The second segmentation map may be output from an output layer of the third portion 826.
[0096] The second ANN 820 may be trained end to end. Alternatively, one or more of the portions of the second ANN may be trained individually, e.g. using suitable training data reflecting the part of the segmentation process being performed by the particular portion being trained.
[0097] With reference to
[0098] The third ANN 830 may comprise a second portion 834 comprising one or more layers, e.g. fully connected layers, configured to receive the output of the feature extraction layers of the first portion 832 and generate the second segmentation map S.sub.2. The second segmentation map may be output from an output layer of the second portion 834.
[0099] The third ANN 830 may be trained end to end. Alternatively, one or more of the portions of the third ANN may be trained individually, e.g. using suitable training data reflecting the part of the segmentation map generation process being performed by the particular portion.
[0100] Returning to
[0101] The mismatch determined at the mismatch estimation block 210 may correspond to a mismatch at a time at which the first and second medical image I.sub.1, I.sub.2 were determined. It may be desirable to determine, e.g. predict, a mismatch at a predetermined period of time after the time at which the first and second medical images were captured. However, it may be undesirable to repeat the process of capturing the first and second medical images.
[0102] The system 200 may further comprise a future mismatch prediction block 220 at which a predicted first segmentation map, a predicted second segmentation map and/or a predicted mismatch at a predetermined time after the time at which the first and/or second images were captured is determined. The predicted first segmentation map, predicted second segmentation map and/or predicted mismatch may be determined based on one or more of the first medical images I.sub.1, the second medical images I.sub.2, the first segmentation map S.sub.1, the second segmentation map S.sub.2 and the mismatch.
[0103] In some arrangements, the predicted first segmentation map, predicted second segmentation map and/or predicted mismatch may be determined using a Generative Adversarial Network (GAN) having any desirable structure that has been trained to generate the predicted first segmentation map, predicted second segmentation map and/or predicted mismatch. The GAN may be trained using any desirable training method, such as an unsupervised learning method, a semi-supervised learning method or a fully supervised learning method. For example, the GAN may be trained using a training sequence comprising example first and second segmentation maps that have been determined from first and second medical images captured at a first time, and labels comprising first and second segmentation maps that have been generated based on first and second medical images captured at a second time that is a predetermined period of time after the first time.
[0104] In some arrangements, the mismatch prediction block 220 may be configured to determine a plurality of predicted first segmentation maps, predicted second segmentation maps and/or predicted mismatches at a plurality of predetermined times after the time at which the first and/or second images were captured, in order to establish a timeline of the progression of the first segmentation map, the second segmentation map and/or the mismatch.
[0105] The system 200 may further comprise a mismatch alert block 230 configured to inform a user of the system 200 of a current mismatch value and/or one or more values of future predicted mismatch values from the timeline of mismatch values determined at the mismatch prediction block. In some arrangements, the mismatch alert block 230 may be configured to generate an alert for the user of the system 200 if the current mismatch or a future estimated mismatch, e.g. a size of the mismatch or a future estimated mismatch, is below a threshold value or a rate of change of the mismatch, e.g. the size of the mismatch, is above a threshold rate.
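The alert conditions described for the mismatch alert block 230 may be sketched as follows. The threshold values and the simple finite-difference rate are illustrative placeholders, not values from the disclosure.

```python
def mismatch_alert(current, predicted, size_threshold=10.0,
                   rate_threshold=1.0, dt=1.0):
    """Alert if the current or predicted mismatch size falls below a
    threshold, or if its rate of change exceeds a threshold rate.

    current, predicted: mismatch sizes now and after time interval dt.
    """
    rate = (predicted - current) / dt
    return (current < size_threshold
            or predicted < size_threshold
            or abs(rate) > rate_threshold)
```

A small mismatch suggests the penumbra has largely progressed to infarct, while a rapidly changing mismatch indicates fast progression; either condition may warrant alerting the user.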
[0106] With reference to
[0107] In the example system 900 depicted in
[0108] With reference to
[0109] The system 1000 may further comprise one or more workstations 1020 and/or one or more computing devices hosting one or more web applications 1030, which may be operatively connected to the PACS server 1002 and/or the imaging devices 1010. As indicated at 1100, one or more of the component devices of the system 1000 may comprise the system 900 or may be otherwise configured to perform the functions of the system 900, e.g. using software provided at least partially on the component. The component devices of the system 1000 may be configured to perform the functions, e.g. all of the functions, of the system 900 independently. In some arrangements, the component devices of the system 1000 may be configured to operate together with other component devices of the system, e.g. in a client/server mode, to perform the functions of the system. In some arrangements, the functions of the system 900 may be performed by a plurality of microservices operating on one or more of the component devices of the system 1000.
[0110] The system 1000 may further comprise, or be operatively connected to, a cloud computing server 1040, which may comprise the system 900 or be configured to perform one or more of the functions of the system 900, e.g. independently of, or in conjunction with, the other component devices of the system 1000. The cloud server 1040 may be operatively connected to the PACS server 1002. Additionally or alternatively, the cloud server 1040 may be operatively connected to other component devices of the system 1000. In some arrangements, the cloud server 1040 may be configured to host the web application 1030.
[0111] Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the principles and techniques described herein, from a study of the drawings, the disclosure and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored or distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.