AUTOMATED ASSESSMENT OF ENDOSCOPIC DISEASE

20230047100 · 2023-02-16

    Abstract

    The application relates to devices and methods for analysing a colonoscopy video or a portion thereof, and for assessing the severity of ulcerative colitis in a subject by analysing a colonoscopy video obtained from the subject. Analysing a colonoscopy video comprises using a first deep neural network classifier to classify image data from the subject colonoscopy video or portion thereof into at least a first severity class (more severe endoscopic lesions) and a second severity class (less severe endoscopic lesions), wherein the first deep neural network has been trained at least in part in a weakly supervised manner using training image data from a plurality of training colonoscopy videos, the training image data comprising multiple sets of consecutive frames from the plurality of training colonoscopy videos, wherein frames in a set have the same severity class label. Devices and methods for providing a tool for analysing colonoscopy videos are also described.

    Claims

    1. A method of analyzing a colonoscopy video or a portion thereof, the method comprising: using a first deep neural network classifier to classify image data from the colonoscopy video or portion thereof into at least one of a first severity class and a second severity class, the first severity class being associated with more severe endoscopic lesions than the second severity class, wherein the first deep neural network has been trained at least in part in a weakly supervised manner using training image data from a plurality of training colonoscopy videos, wherein the training image data comprises multiple sets of consecutive frames from the plurality of training colonoscopy videos, wherein each of the frames in a set has a severity class label, and wherein all of the frames in a given set have the same severity class label.

    2. The method of claim 1, wherein the method further comprises using a second deep neural network classifier to classify the image data from the colonoscopy video or portion thereof into at least one of a first quality class and a second quality class, wherein the first quality class is associated with better quality images than the second quality class, wherein image data in the first quality class is provided to the first deep neural network classifier, or wherein the second deep neural network classifier has been trained at least in part in a weakly supervised manner using training image data from a plurality of training colonoscopy videos, wherein the training image data comprises multiple sets of consecutive frames from the plurality of training colonoscopy videos, wherein frames in a set include a quality class label, and wherein each set of consecutive frames in the training image data has been assigned a quality class label by visual inspection of the segment of video comprising the respective set of consecutive frames.

    3. The method of claim 1, wherein the frames in each set of frames in the training image data correspond to a single anatomical section of a colon depicted in the colonoscopy video or portion thereof.

    4. The method of claim 1, wherein each set of frames in the training image data has been assigned a first severity class label if visual inspection associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score within a first range, or wherein each set of frames in the training image data has been assigned a first severity class label if two independent visual inspections associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score within a first range.

    5. The method of claim 4, wherein the endoscopic severity score is the Mayo Clinic endoscopic subscore (MCES), and wherein the first range is MCES>1 or MCES>2.

    6. The method of claim 1, wherein the first deep neural network classifier classifies image data into three or more severity classes, wherein each set of frames in the training image data has been assigned one of the three or more severity class labels if visual inspection associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score within a predetermined distinct range for each of the three or more severity classes, or wherein each set of frames in the training image data has been assigned one of the three or more severity class labels if two independent visual inspections associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score within a range associated with the one severity class label, and wherein the endoscopic severity score is the MCES, and the first deep neural network classifier classifies image data into four severity classes, each severity class of the four severity classes being associated with a different MCES.

    7. The method of claim 2, wherein the image data from the colonoscopy video or portion thereof comprises multiple consecutive frames, and wherein analyzing the colonoscopy video or portion thereof comprises using the first deep neural network classifier or the second deep neural network classifier to individually classify the multiple frames in the image data from the colonoscopy video or portion thereof.

    8. The method of claim 7, wherein classifying individual frames comprises providing, for each of the multiple frames, a probability of the frame belonging to the first severity class or a probability of the frame belonging to the second severity class.

    9. The method of claim 1, wherein analyzing the colonoscopy video or portion thereof further comprises assigning a summarized severity class for the colonoscopy video or portion thereof based on the individual classification from the first deep neural network classifier for the multiple frames, or wherein classifying individual frames comprises providing, for each of the multiple frames, a probability of the frame belonging to the first severity class, and assigning a summarized severity class for the colonoscopy video or portion thereof based on the individual classification for the multiple frames comprises: assigning the first severity class if the average of the probabilities of the frames belonging to the first severity class is above a threshold or assigning the first severity class if the proportion of frames assigned to the first severity class is above a threshold.

    10. The method of claim 2, wherein one or more of the first deep neural network classifier, the second deep neural network classifier, or a third deep neural network classifier comprises a convolutional neural network (CNN), wherein the CNN has been pre-trained on unrelated image data, wherein the CNN is a 50-layer CNN, or wherein the CNN is a CNN that has been pre-trained using a deep residual learning framework.

    11. The method of claim 1, wherein the endoscopic lesions are indicative of ulcerative colitis (UC), or wherein the first severity class is associated with more severe UC than the second severity class.

    12. (canceled)

    13. The method of claim 11, wherein analyzing the colonoscopy video or portion thereof comprises: (i) analyzing the colonoscopy video or portion thereof using the first deep neural network classifier that classifies image data between a severity class corresponding to MCES>1 and a severity class corresponding to MCES≤1; (ii) analyzing the colonoscopy video or portion thereof using the first deep neural network classifier that classifies image data between a severity class corresponding to MCES>2 and a severity class corresponding to MCES≤2; (iii) selecting the subject for treatment for UC with a first treatment if at least one segment of the video is assigned the severity class corresponding to MCES>2, and with a second treatment if no segment of the video is assigned the severity class corresponding to MCES>2, but at least one segment of the video is assigned the severity class corresponding to MCES>1; and (iv) treating the subject based on the selection in step (iii).

    14. A method of providing a tool for analyzing a colonoscopy video or a portion thereof, the method comprising: obtaining training image data comprising multiple sets of consecutive frames from a plurality of training colonoscopy videos, wherein each of the frames in a set includes a severity class label selected from at least one of a first severity class and a second severity class, wherein the first severity class is associated with more severe endoscopic lesions than the second severity class; and training a first deep neural network to classify image data into at least one of the first severity class and the second severity class, wherein the training is performed in a weakly supervised manner using the training image data and the severity class label; wherein the method further comprises using a second deep neural network classifier to classify training image data comprising multiple frames into at least one of a first quality class and a second quality class, wherein the first quality class is associated with better quality images than the second quality class, and wherein training the first deep neural network classifier is performed using the training image data that is classified in the first quality class by the second deep neural network classifier.

    15. A system for assessing the severity of ulcerative colitis in a subject from a colonoscopy video obtained from the subject, the system comprising: at least one processor; and at least one non-transitory computer readable medium containing instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising: classifying image data from the colonoscopy video or portion thereof into at least one of a first severity class and a second severity class, wherein the first severity class is associated with more severe ulcerative colitis than the second severity class, wherein a first deep neural network has been trained at least in part in a weakly supervised manner using training image data from a plurality of training colonoscopy videos, wherein the training image data comprises multiple sets of consecutive frames from the plurality of training colonoscopy videos, and wherein each of the frames in a set includes a severity class label, and wherein all of the frames in a given set have the same severity class label.

    16. The system of claim 15, wherein the operations further comprise using a second deep neural network classifier to classify the image data from the colonoscopy video or portion thereof into at least one of a first quality class and a second quality class, wherein the first quality class is associated with better quality images than the second quality class, wherein image data in the first quality class is provided to the first deep neural network classifier, or wherein the second deep neural network classifier has been trained at least in part in a weakly supervised manner using training image data from a plurality of training colonoscopy videos, wherein the training image data comprises multiple sets of consecutive frames from the plurality of training colonoscopy videos, wherein frames in a set include a quality class label, and wherein each set of consecutive frames in the training image data has been assigned a quality class label by visual inspection of the segment of video comprising the respective set of consecutive frames.

    17. The system of claim 15, wherein the frames in each set of frames in the training image data correspond to a single anatomical section of a colon depicted in the colonoscopy video or portion thereof.

    18. The system of claim 15, wherein each set of frames in the training image data has been assigned a first severity class label if visual inspection associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score within a first range, or wherein each set of frames in the training image data has been assigned a first severity class label if two independent visual inspections associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score within a first range.

    19. The system of claim 18, wherein the endoscopic severity score is the Mayo Clinic endoscopic subscore (MCES), and wherein the first range is MCES>1 or MCES>2.

    20. The system of claim 15, wherein the first deep neural network classifier classifies image data into three or more severity classes, wherein each set of frames in the training image data has been assigned one of the three or more severity class labels if visual inspection associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score within a predetermined distinct range for each of the three or more severity classes, or wherein each set of frames in the training image data has been assigned one of the three or more severity class labels if two independent visual inspections associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score within a range associated with the assigned one severity class label, and wherein the endoscopic severity score is the MCES, and the first deep neural network classifier classifies image data into four severity classes, each severity class of the four severity classes being associated with a different MCES.

    Description

    BRIEF DESCRIPTION OF THE FIGURES

    [0150] FIG. 1 shows an exemplary computing system in which embodiments of the present invention may be used.

    [0151] FIG. 2 is a flow chart illustrating a method of providing a tool for analysing colonoscopy videos, and a method of analysing a colonoscopy video.

    [0152] FIG. 3 illustrates schematically a method of providing a tool for analysing colonoscopy videos, and a method of analysing a colonoscopy video.

    [0153] FIG. 4 illustrates schematically a method of assessing the severity of ulcerative colitis in a subject.

    [0154] FIGS. 5A and 5B show receiver operator characteristic (ROC) curves for an exemplary deep neural network classifying image data from colonoscopy videos in a first severity class corresponding to MCES>1 and a second severity class corresponding to MCES≤1 (FIG. 5A), and an exemplary deep neural network classifying image data from colonoscopy videos in a first severity class corresponding to MCES>2 and a second severity class corresponding to MCES≤2 (FIG. 5B).

    [0155] Where the figures laid out herein illustrate embodiments of the present invention, these should not be construed as limiting the scope of the invention. Where appropriate, like reference numerals will be used in different figures to relate to the same structural features of the illustrated embodiments.

    DETAILED DESCRIPTION

    [0156] Specific embodiments of the invention will be described below with reference to the Figures.

    [0157] FIG. 1 shows an exemplary computing system in which embodiments of the present invention may be used. A computing device 1 comprises one or more processors 101 (which can be in the form of e.g. a server) and one or more memories 102. The computing device 1 may be equipped with means to communicate with other elements of a computing infrastructure, for example via the public internet. The computing device 1 may also comprise a user interface, which typically includes a display. In the illustrated embodiment, the computing device 1 is able to communicate with one or more external databases 202, for example to obtain training image data, receive a colonoscopy video or portion thereof to be analysed, and/or output the results of an analysis of a colonoscopy video for storage. Depending on the particular implementation, the database 202 may in practice be included in the memory 102. In embodiments, the first computing device 1 is configured to provide a tool for analysing colonoscopy videos, as will be described further below, using training data received from database 202. In such embodiments, the memory 102 may store instructions that, when executed by the processor 101, cause the processor to execute the steps of a method of providing a tool for analysing a colonoscopy video as described herein. In some such embodiments, the tool once obtained may be provided to a second computing device (not shown) in the form of a set of instructions that, when executed by a processor of the second computing device, cause the processor to execute the steps of a method of analysing a colonoscopy video as described herein. The second computing device may comprise a user interface, such as e.g. a display, through which it can output the results of the analysis to a user. In embodiments, the first computing device 1 is configured to analyse a colonoscopy video or a portion thereof (e.g. received from database 202), and/or to automatically assess the severity of ulcerative colitis in a subject by analysing a colonoscopy video or portion thereof from the subject. In such embodiments, the memory 102 may store instructions that, when executed by the processor 101, cause the processor to execute the steps of a method of analysing a colonoscopy video, and optionally assessing the severity of ulcerative colitis, as described herein.

    [0158] FIGS. 2 and 3 show general embodiments of a method of providing a tool for analysing colonoscopy videos, and a method of analysing a colonoscopy video. As will be explained further below, a method of providing a tool for analysing colonoscopy videos may comprise steps 210/310 (obtain (training) image data, including step 212 (obtain raw colonoscopy videos) and optionally steps 214-218 (annotate videos with anatomical section labels, quality class labels and/or severity class labels)), and step 240/340 (train severity scoring network, including steps 242 (obtain severity class prediction for each frame), 246 (compare severity class prediction with severity class labels) and optionally step 244 (obtain summarised severity class prediction)). Optionally, such a method may additionally comprise steps 220/320 (train quality control network) and 230/330 (use quality control network to filter low quality frames). A method of analysing a colonoscopy video may comprise steps 210/310 (obtain image data for analysis, including step 212 (obtain raw colonoscopy video) and optionally step 218 (annotate video with anatomical section labels)), optionally step 230/330 (use quality control network to filter low quality frames), and step 250/350 (use severity scoring network to obtain severity class prediction, including step 252 (obtain severity class prediction for each frame) and optionally step 254 (obtain summarised severity class prediction)). In other words, the training steps 220/320 (train quality control network) and 240/340 (train severity scoring network), and the associated steps of annotating 214, 216 training data with class labels (used for training), are performed as part of a method of providing a tool for analysing colonoscopy videos. By contrast, a method of analysing a colonoscopy video uses one or more classifiers that have been previously trained as described herein, and therefore does not necessarily require the training steps and associated labelling of training data.

    [0159] At step 210/310, image data is obtained for training (in which case the image data is referred to as “training image data”) or for analysis (in which case the image data is referred to as a colonoscopy video 300 or a portion thereof from a subject). Training image data comprises a plurality of training colonoscopy videos 300. Each colonoscopy video 300 (whether in the training data or the data for analysis) comprises a plurality of consecutive frames 300.sub.1 to 300.sub.n. In the training image data, the plurality of consecutive frames 300.sub.1 to 300.sub.n form sets that have been assigned the same severity class label S.sub.1, S.sub.2. At least two severity classes are used, where the first severity class S.sub.1 is associated with more severe endoscopic lesions and/or more severe ulcerative colitis than the second severity class S.sub.2. In the embodiment illustrated in FIG. 3, a total of two severity classes are used, and the classifier that will be trained is a binary classifier. However, in other embodiments three or more classes may be used, and a corresponding multiclass classifier may be trained. The step of obtaining 210 image data for training or analysis may encompass a number of steps as illustrated on FIG. 2, some of which may be optional. In the simplest case, obtaining 210 image data for analysis may simply include obtaining 212 a raw colonoscopy video 300 from a subject, to be analysed. For example, this can be obtained from a database (such as database 202), from a user submitting data for analysis, or from a computing device directly associated with an endoscope. Optionally, the raw colonoscopy video 300 for analysis may be annotated 218 to provide labels A.sub.1, A.sub.2 indicative of the anatomical section visible in each of multiple segments of the video. Alternatively, anatomical section labels A.sub.1, A.sub.2 may already be provided with the raw colonoscopy video 300 in a machine readable form. In embodiments, segments of video may be manually annotated with anatomical section labels A.sub.1, A.sub.2, and each frame 300.sub.1 to 300.sub.n may be assigned an anatomical section label corresponding to that of the segment comprising the frame. Alternatively, anatomical section information may be automatically extracted from the raw colonoscopy video 300. In such embodiments, annotating 218 segments of video with anatomical section labels comprises automatically assigning an anatomical section label A.sub.1, A.sub.2 to each frame 300.sub.1 to 300.sub.n of the video 300. For example, a raw colonoscopy video 300 may be provided with graphical labels indicative of the anatomical section visible on the frame, embedded in each frame. These may be based e.g. on manual annotation of segments of the video by a medical expert at the time of acquisition. In such embodiments, annotating 218 segments of video with anatomical section labels may comprise assigning an anatomical label A.sub.1, A.sub.2 to each frame using a deep neural network classifier (also referred to herein as third deep neural network classifier) that has been trained to classify image data (i.e. a collection of frames 300.sub.1 to 300.sub.n) into multiple anatomical classes, each class corresponding to an anatomical section, based at least in part on the information in the graphical label embedded in the image data. Graphical labels of a limited diversity (e.g. a few different strings of text corresponding to sections of the colon such as sigmoid, rectum and descending colon) are relatively easy patterns for a deep neural network to discriminate between, and as such the error rate for this classification is expected to be very low (e.g. close to 0% or even 0% of the frames misclassified). In embodiments, a raw colonoscopy video 300 for analysis may relate to a single anatomical section, or may be divided into portions that relate to single anatomical sections, where each portion can be analysed separately as described herein. In such embodiments, no annotation with anatomical section labels may be used, and each portion of video may form its own segment.

    [0160] Similarly, obtaining 210/310 training image data comprises obtaining 212 a plurality of raw colonoscopy videos 300. These videos can for example be obtained from one or more databases (such as database 202). The raw colonoscopy videos 300 for training may optionally be annotated 218 to provide anatomical labels A.sub.1, A.sub.2 indicative of the anatomical section, as described above. The training videos 300 are accompanied by severity information provided by experts and assigned to segments of the videos. The severity information in the training image data will be used, directly or indirectly, to train a severity-based classifier (referred to as severity scoring network, SSN), as will be described further below. In practice, a segment of video 300 is a set of frames 300.sub.1 to 300.sub.n, and as such all frames in a set have the same severity information. In the embodiment illustrated in FIG. 3, segments correspond to anatomical sections, such that each segment of a training video 300 comprises frames that have the same anatomical label A.sub.1, A.sub.2 and the same severity information. In particular, a first segment comprises frames 300.sub.1 to 300.sub.4 and a second segment comprises frames 300.sub.5 to 300.sub.n in the embodiment shown on FIG. 3. The severity information typically comprises an endoscopic severity score. Alternatively, the severity information may comprise information that can be converted into a severity score (such as e.g. free text that can be converted to a severity score using pre-established rules). For example, the endoscopic severity score may be the Mayo Clinic endoscopic subscore (MCES). The MCES is a standard scale for scoring endoscopic lesions by visual assessment of colonoscopy videos. An MCES is commonly provided as part of the clinical assessment of an endoscopy video, such as e.g. in clinical trials.

    [0161] The raw colonoscopy videos 300 for training may optionally be annotated 214 with a severity class label S.sub.1, S.sub.2 for each segment of video that will form part of the training data. In embodiments, this comprises converting the severity information into classes S.sub.1, S.sub.2. This may be advantageous where the severity information is in a format that is not directly compatible with the classifier to be trained. This may be the case e.g. where the severity information is not in the form of a discrete class or scale, where the number of classes is not equal to the number of classes that the classifier will be trained to discriminate, and/or where the severity information comprises assessments from more than one expert. Conversely, this step may not be necessary where the severity information is already in a format suitable for use in training the chosen classifier.

    [0162] In embodiments, a set of frames (e.g. frames 300.sub.1 to 300.sub.4 on FIG. 3) in the training image data may be annotated 214 with a severity class label by assigning a first severity class label S.sub.1 if visual inspection associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score within a first range (i.e. if the severity information comprises a severity score within a first range). In other words, a set of frames (e.g. frames 300.sub.1 to 300.sub.4 on FIG. 3) in the training image data may be annotated 214 with a severity class label by assigning a first severity class label S.sub.1 if the severity information for the segment of training colonoscopy video comprising the set of frames comprises an endoscopic severity score within a first range (i.e. if the severity information comprises a severity score within a first range). Where the severity information comprises assessments from more than one expert, a set of frames (e.g. frames 300.sub.1 to 300.sub.4 on FIG. 3) in the training image data may be annotated 214 with a severity class label by assigning a first severity class label if two independent visual inspections associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score within a first range. In other words, a set of frames (e.g. frames 300.sub.1 to 300.sub.4 on FIG. 3) in the training image data may be annotated 214 with a severity class label by assigning a first severity class label if the severity information for the segment of training colonoscopy video comprising the set of frames comprises two independent endoscopic severity scores within a first range. In embodiments, a set of frames (e.g. frames 300.sub.1 to 300.sub.4 on FIG. 3) in the training image data may be annotated 214 with a severity class label by assigning a first severity class label S.sub.1 if two independent visual inspections associated the segment of training colonoscopy video 300 comprising the set of frames with the same endoscopic severity score, and the endoscopic severity score is within a first range. In other words, a set of frames (e.g. frames 300.sub.1 to 300.sub.4 on FIG. 3) in the training image data may be annotated 214 with a severity class label by assigning a first severity class label if the severity information for the segment of training colonoscopy video comprising the set of frames comprises two independent endoscopic severity scores within a first range, and the two independent endoscopic severity scores are the same. Sets of frames in the training image data that do not satisfy the criteria to be annotated with a severity class label (for example because the severity information does not comprise two independent severity scores in agreement, i.e. identical or within the same range) may be excluded from the training data. A similar approach may be implemented for further severity class labels and corresponding ranges of severity scores. For example, a second severity class label S.sub.2 may be assigned if the severity information comprises an endoscopy score (or two independent endoscopy scores) within a second range. 
In the particular case where a binary classifier will be trained, sets of frames 300.sub.1 to 300.sub.n in the training image data may be annotated 214 with a severity class label by assigning a first severity class label S.sub.1 if visual inspection associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score above (or at least as high as) a threshold (i.e. if the severity information comprises a severity score above a threshold). Similarly, a set of frames in the training image data may be annotated 214 with a severity class label by assigning a second severity class label S.sub.2 if visual inspection associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score at or below (or below, if class S.sub.1 includes data with an endoscopic severity score at least as high as the threshold) the threshold. Criteria on agreement between multiple scores may also optionally be used in this case, as explained above. For example, where the endoscopic severity score is the MCES, a useful threshold may be MCES=1 or MCES=2, equivalent to a first range MCES>1 or MCES>2. Similar approaches may be implemented where the classifier will be trained to classify image data in three or more severity classes. In particular, a set of frames in the training image data may be annotated 214 with a severity class label by assigning one of three or more severity class labels if visual inspection associated the segment of training colonoscopy video comprising the set of frames with an endoscopic severity score within a predetermined distinct (i.e. non-overlapping) range for each of the three or more severity classes. Criteria on agreement between multiple independent scores may also optionally be used in such cases, as explained above.
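
    The binary labelling rule described above can be illustrated with the following minimal Python sketch (the function name, class-label strings and agreement criterion are illustrative assumptions, not part of the disclosure):

        def assign_binary_severity_label(mces_reader_1, mces_reader_2, threshold=1,
                                         require_agreement=True):
            # Assigns the first severity class label "S1" if the segment's MCES
            # scores fall in the first range (MCES > threshold), "S2" if they
            # fall in the second range, and None if the two independent readings
            # fall in different ranges (such segments may be excluded from the
            # training data).
            in_first_1 = mces_reader_1 > threshold
            in_first_2 = mces_reader_2 > threshold
            if require_agreement and in_first_1 != in_first_2:
                return None  # the two independent visual inspections disagree
            return "S1" if in_first_1 else "S2"

        print(assign_binary_severity_label(3, 3, threshold=2))  # -> "S1" (MCES>2)
        print(assign_binary_severity_label(2, 1, threshold=1))  # -> None (no agreement)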

    [0163] In embodiments, the severity information is in the form of a discrete class (such as e.g. one of the four levels of the MCES scale), and the classifier will be trained to classify image data into classes corresponding to the different discrete classes used in the severity information (such as e.g. four severity classes, each severity class corresponding to a different MCES). While no aggregation is required in such embodiments to convert the discrete classes in the severity information into classes suitable for training the classifier, the step of annotating 214 the videos 300 for training with a severity class label S.sub.1, S.sub.2 for each segment of video 300 that will form part of the training data may still be performed in some embodiments. For example, where the severity information comprises multiple scores e.g. provided by independent experts, a single class label may be derived from the severity information. In some such embodiments, a set of frames in the training image data may be annotated 214 with a severity class label by assigning a first (respectively second, third, etc. depending on the number of classes) severity class label if two independent visual inspections associated the segment of training colonoscopy video comprising the set of frames with the same, first (respectively second, third, etc.) endoscopic severity score.

    [0164] At optional step 216, the raw colonoscopy videos 300 for training may be annotated with a quality class label Q.sub.1, Q.sub.2 for each segment of video 300 that will form part of the training data. This may be advantageous where a classifier, preferably a deep neural network also referred to herein as quality control network (QCN) or second deep neural network classifier, is trained 220 and used 230 to filter low quality frames from the training data that will be used to train the severity-based classifier (SSN). Annotating the training image data with quality class labels Q.sub.1, Q.sub.2 (or extracting quality class labels from previously annotated data) enables such a classifier to be trained. Step 216 may be performed by manually assigning a quality class label Q.sub.1, Q.sub.2 to segments of video in the training data, based on one or more criteria. Advantageously, these criteria may apply to a visual inspection of the videos 300. For example, a first (good quality) quality class label Q.sub.1 may be assigned to a segment of training colonoscopy video 300 if the colon walls and the colon vessels can be distinguished on visual inspection, and a second quality class label otherwise. Optionally, a segment of training colonoscopy video 300 may be assigned a first quality class label Q.sub.1 if it additionally satisfies one or more visual inspection criteria based on the presence or absence of water, hyperreflective areas, stool and/or blurring, and a second quality class label Q.sub.2 otherwise. Visual inspection may be crowdsourced, and/or may be performed by non-experts. Assignment of quality class labels Q.sub.1, Q.sub.2 to training colonoscopy videos 300 may be performed using any means that enables the annotation of video files. Further, a single quality annotation may be sufficient. As the skilled person understands, when the quality annotation is performed on a segment-by-segment basis, segments comprise multiple consecutive frames and each such frame will inherit the label of the segment that it is part of. In cases where multiple independent quality annotations are performed, their results may be combined on a frame-by-frame basis using any appropriate scheme, such as e.g. assigning the most common quality class label for the frame across the independent quality annotations, assigning the lowest quality class label across the independent quality annotations, assigning the lowest quality class label that is represented above a threshold across the independent quality annotations, etc., as illustrated in the sketch below. As illustrated on FIG. 3, there is typically no requirement for the segments annotated with the same quality class label Q.sub.1, Q.sub.2 to correspond to the segments annotated with the same severity class label S.sub.1, S.sub.2, or anatomical label A.sub.1, A.sub.2. By contrast, segments of training videos 300 that have the same anatomical label A.sub.1, A.sub.2 may have the same severity class label S.sub.1, S.sub.2 because severity information is typically provided in a clinical setting on an anatomical section basis (e.g. one severity score per anatomical section). In other embodiments, the raw colonoscopy videos 300 for training may be automatically annotated with a quality class label Q.sub.1, Q.sub.2 for each segment of video 300 or each individual frame that will form part of the training data. In such embodiments, previously trained classifiers (such as e.g. deep neural network classifiers) may be used to perform the annotation.
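
    The frame-by-frame combination schemes mentioned above may be sketched as follows (a minimal Python sketch; the "Q1"/"Q2" strings and the function name are illustrative assumptions, with "Q2" standing for the lower quality class):

        from collections import Counter

        def combine_quality_labels(labels_for_frame, scheme="most_common"):
            # labels_for_frame: the quality class labels assigned to one frame
            # by several independent quality annotations, e.g. ["Q1", "Q1", "Q2"].
            if scheme == "most_common":
                return Counter(labels_for_frame).most_common(1)[0][0]
            if scheme == "lowest":
                # assign the lowest quality class represented in the annotations
                return "Q2" if "Q2" in labels_for_frame else "Q1"
            raise ValueError("unknown combination scheme: " + scheme)

        print(combine_quality_labels(["Q1", "Q1", "Q2"]))             # -> "Q1"
        print(combine_quality_labels(["Q1", "Q2"], scheme="lowest"))  # -> "Q2"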

    [0165] As a result of step 210, training image data is obtained that comprises multiple sets of frames 300.sub.1 to 300.sub.n from multiple colonoscopy videos, each frame 300.sub.1 to 300.sub.n being associated with a severity class label S.sub.1, S.sub.2 and optionally a quality class label Q.sub.1, Q.sub.2 and/or an anatomical label A.sub.1, A.sub.2. Where quality class labels Q.sub.1, Q.sub.2 are present in the training data, optional steps 220, 320 and 230, 330 may be implemented in a method of providing a tool as described herein. In step 220, a deep neural network classifier (referred to as quality control network, QCN) may be trained 220 to classify frames into corresponding quality classes. The QCN may be subsequently used 230 to filter image data (whether training image data, in the context of a method of providing a tool as described herein, or data from a colonoscopy video for analysis). The training 220 is performed in a weakly supervised manner because the quality class labels Q.sub.1, Q.sub.2 are assigned to frames based on segment-level annotations and/or are automatically assigned to segments or individual frames using previously trained classifiers. As such, these frame-quality class pairs do not represent ground truth information since there is a relatively high level of uncertainty in relation to the quality class assignment of each particular frame. Indeed, not all frames in a segment are expected to display the features that led to the assignment of the quality label to the segment containing the frame, and/or any previously trained classifier is expected to have a less than 100% accuracy in classifying previously unseen data. Once trained, the QCN can be used 230/330 to filter image data before it is classified by the severity scoring network, as will be described further below. In particular, in the embodiment illustrated on FIG. 3, the QCN takes as input individual frames 300.sub.1 to 300.sub.n of training or analysis videos 300, and produces as an output 330A a probability of the frame belonging to the first class P(Q.sub.1).sup.1 to P(Q.sub.1).sup.n and/or a probability of the frame belonging to the second class. A discrete classification label 330B is derived from this in order to decide which frames are in the lower quality class and should be filtered out, as illustrated by the ticks and crosses. In practice, a discrete classification label 330B may only be implicitly derived, by excluding frames that do not satisfy the criteria applied to decide which frames are in the higher quality class. For example, a frame (e.g. frame 300.sub.1) may be considered to be classified in the first quality class by the QCN if the probability P(Q.sub.1).sup.1 of the frame belonging to the first quality class reaches or exceeds a threshold, and in the second class otherwise. The threshold can be chosen a priori or dynamically (i.e. depending on the particular instance of the QCN and data). For example, the threshold can be determined dynamically such that the sets of frames (i.e. sets of consecutive frames that have been assigned the same severity class and optionally the same anatomical section label) in the training image data contain on average between 20 and 40, preferably about 30, frames classified in the first quality class. Thresholds between 0.9 and 0.99, preferably 0.95, were found suitable.
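
    The quality-based filtering and the dynamic choice of threshold described in this paragraph may be sketched as follows (a minimal numpy sketch, assuming per-frame probabilities P(Q.sub.1) have already been produced by a trained QCN; the function names and search grid are illustrative):

        import numpy as np

        def dynamic_quality_threshold(probs_per_set, target_mean_frames=30):
            # probs_per_set: one 1-D array per set of consecutive frames, holding
            # each frame's predicted probability of belonging to the first (good)
            # quality class. Scans candidate thresholds from high to low and
            # returns the highest threshold for which the sets retain on average
            # at least `target_mean_frames` frames.
            for threshold in np.arange(0.99, 0.90, -0.001):
                mean_kept = np.mean([np.sum(p >= threshold) for p in probs_per_set])
                if mean_kept >= target_mean_frames:
                    return threshold
            return 0.90  # fall back to the lowest threshold considered

        def filter_frames(frames, probs, threshold):
            # Keeps only the frames classified in the first quality class.
            return [f for f, p in zip(frames, probs) if p >= threshold]

        # Toy example with synthetic probabilities for three sets of 100 frames:
        rng = np.random.default_rng(0)
        sets = [rng.uniform(0.8, 1.0, size=100) for _ in range(3)]
        t = dynamic_quality_threshold(sets)
        kept = filter_frames(list(range(100)), sets[0], t)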

    [0166] The (optionally quality filtered) training data is used at step 240/340 to train a severity-based deep neural network (SSN, also referred to herein as first deep neural network classifier) to classify data into severity classes, using the severity class labels S.sub.1, S.sub.2 previously obtained. The training 240/340 is performed in a weakly supervised manner because the severity class labels S.sub.1, S.sub.2 are assigned to frames based on segment-level annotations. As such, these frame-severity class pairs do not represent ground truth information since not all frames in a segment are expected to display the features that led to the assignment of the severity class label to the segment containing the frame. The trained SSN can be used 250/350 to analyse colonoscopy videos. The SSN takes as input individual frames 300.sub.1 to 300.sub.n and produces 242/252 as output a severity class prediction for each frame that is analysed. In particular, in the embodiment illustrated on FIG. 3, the SSN takes as input individual frames 300.sub.1 to 300.sub.n of training or analysis videos 300 (or individual frames 300.sub.1 to 300.sub.3 and 300.sub.n−1 to 300.sub.n if quality-based filtering is used), and produces as an output 340A/350A a probability of the frame belonging to the first class P(S.sub.1).sup.1 to P(S.sub.1).sup.n and/or a probability of the frame belonging to the second class. In order to evaluate the performance of the SSN during training, a discrete classification label 340B/350B may be derived from these probabilities and compared 246 to the severity class annotation on a frame-by-frame basis. The discrete classification label 340B/350B may be obtained by applying a threshold on the probability or probabilities output by the SSN. For example, a frame (e.g. frame 300.sub.1) may be assigned to the first class if the probability (e.g. P(S.sub.1).sup.1) of the frame belonging to the first class is above a threshold. Frames may be assigned to the second class otherwise, or may be considered to have unknown classification if the probability of the frame being in the first class is within a range, typically a range around 0.5. Similarly, for multilevel classifiers, a frame may be assigned to the class that has the highest probability, provided that this exceeds a threshold. Alternatively, frames may be assigned to the class that has the highest probability, regardless of the actual value of the probability.
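
    The derivation of a discrete per-frame label from the SSN's probabilities may be sketched as follows (a minimal Python sketch; the threshold values, the width of the "unknown" band and the label strings are illustrative assumptions):

        def frame_severity_label(p_s1, threshold=0.5, unknown_band=0.05):
            # p_s1: the SSN's predicted probability that a frame belongs to the
            # first (more severe) class. Probabilities within a narrow band
            # around 0.5 yield an unknown classification.
            if abs(p_s1 - 0.5) < unknown_band:
                return None  # unknown classification
            return "S1" if p_s1 > threshold else "S2"

        def multiclass_frame_label(class_probs, threshold=None):
            # class_probs: mapping from class label to predicted probability.
            # Assigns the class with the highest probability, optionally only
            # if that probability exceeds a threshold.
            label, p = max(class_probs.items(), key=lambda kv: kv[1])
            if threshold is not None and p < threshold:
                return None
            return label

        print(frame_severity_label(0.80))  # -> "S1"
        print(frame_severity_label(0.52))  # -> None (within the unknown band)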

    [0167] Optional steps 244 and/or 254 may additionally be implemented wherein a summarised severity class prediction 340C/350C is obtained for each segment of video, based on the predictions (340A/350A or 340B/350B) for each of the frames that make up the segment (and that have been analysed with the SSN). In the example illustrated on FIG. 3, frames that belong to the same segment are grouped together based on the anatomical section label A.sub.1. As the severity information for the training data was assigned per anatomical section, the summarised severity class prediction 340C may be directly compared with the original severity information in step 246, in order to evaluate the performance of the SSN during training 240. Further, for the purpose of analysing a colonoscopy video, obtaining 254 a summarised severity class prediction 350C per anatomical segment may also be useful since it replicates the granularity used in conventional clinical assessments. Alternatively, for the purpose of analysing a colonoscopy video, the segments based on which a summarised severity class prediction 340C/350C is obtained may be automatically determined, preferably without a priori information about the data. For example, a segment may be defined as a set of consecutive frames where all of the frames have the same discrete class assignment 350B, or all lack a discrete class assignment 350B (i.e. the classification is unknown or the frame was excluded for e.g. quality reasons).
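
    The automatic determination of segments from consecutive identically classified frames may be sketched as follows (a minimal Python sketch using run-length grouping; None stands for frames with unknown classification or frames excluded for quality reasons):

        from itertools import groupby

        def automatic_segments(frame_labels):
            # frame_labels: one discrete class assignment per consecutive frame.
            # Returns (label, frame_indices) pairs, one per maximal run of
            # consecutive frames sharing the same discrete class assignment.
            return [(label, [i for i, _ in run])
                    for label, run in groupby(enumerate(frame_labels),
                                              key=lambda pair: pair[1])]

        print(automatic_segments(["S1", "S1", None, "S2", "S2", "S2"]))
        # -> [('S1', [0, 1]), (None, [2]), ('S2', [3, 4, 5])]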

    [0168] The summarised severity class 340C/350C for a segment comprising multiple frames may be obtained 244/254 directly based on the probabilities 340A/350A output by the SSN, or based on the discrete class assignments 340B/350B derived from these probabilities. In embodiments, a summarised severity class 340C/350C for a segment may be obtained 244/254 by assigning the first severity class to the segment if the average of the probabilities 340A/350A of the frames belonging to the first severity class output by the SSN is above a threshold. In the embodiment shown on FIG. 3, a summarised severity class 340C/350C for the segment comprising frames 300.sub.1 to 300.sub.3 may be obtained 244/254 by assigning the first severity class to the segment if the average of the probabilities 340A/350A of the frames 300.sub.1 to 300.sub.3 belonging to the first severity class (P(S.sub.1).sup.1 to P(S.sub.1).sup.3) output by the SSN is above a threshold. The threshold may be fixed. Alternatively, the threshold may be dynamically determined based on one or more criteria selected from: a predetermined acceptable proportion of false positives, a predetermined acceptable proportion of false negatives, a predetermined minimum accuracy, a predetermined minimum precision, and/or a predetermined minimum recall. Thresholds between 0.5 and 0.9 have been found appropriate. As another example, a summarised severity class 340C/350C for a segment may be obtained 244/254 by assigning the first severity class to the segment if the proportion of frames assigned to the first severity class is above a threshold. In the example illustrated on FIG. 3, assuming a threshold of 0.5, the summarised severity class 340C/350C for the segment comprising frames 300.sub.1 to 300.sub.3 is assigned as the first severity class because 66% of the frames 300.sub.1 to 300.sub.3 are assigned to the first severity class. By contrast, the summarised severity class 340C/350C for the segment comprising frames 300.sub.n−1 to 300.sub.n is assigned as the second severity class because 50% of the frames 300.sub.n−1 to 300.sub.n are assigned to the first severity class. Other schemes may be used, for example where more than two classes are used. For example, a summarised severity class 340C/350C for a segment may be obtained 244/254 by assigning to the segment the highest severity class (i.e. the most severe class) that is represented amongst the discrete class assignments 340B/350B for the frames of the segment above a threshold for the respective severity class. The threshold may be lower for higher severity classes (i.e. requiring a lower proportion of the frames in the segment assigned the discrete severity class label 340B/350B). Indeed, higher severity classes are expected to be easier to identify (and as such predictions in the higher severity class may be made with higher confidence).
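
    The two summarisation schemes described in this paragraph may be sketched as follows (a minimal Python sketch; names and thresholds are illustrative, with thresholds strictly exceeded as in the FIG. 3 example):

        import numpy as np

        def summarise_by_mean_probability(frame_probs, threshold=0.5):
            # frame_probs: P(S1) output by the SSN for each analysed frame of one
            # segment; the first severity class is assigned if the mean exceeds
            # the threshold (thresholds between 0.5 and 0.9 were found suitable).
            return "S1" if np.mean(frame_probs) > threshold else "S2"

        def summarise_by_proportion(frame_labels, threshold=0.5):
            # frame_labels: discrete per-frame class assignments for one segment;
            # the first severity class is assigned if the proportion of frames
            # assigned to it exceeds the threshold.
            labelled = [l for l in frame_labels if l is not None]
            p_s1 = sum(l == "S1" for l in labelled) / len(labelled)
            return "S1" if p_s1 > threshold else "S2"

        # FIG. 3 example: 2 of 3 frames (66%) in the first segment are in S1,
        # while 1 of 2 frames (50%) in the last segment is in S1.
        print(summarise_by_proportion(["S1", "S1", "S2"]))      # -> "S1"
        print(summarise_by_proportion(["S1", "S2"]))            # -> "S2"
        print(summarise_by_mean_probability([0.7, 0.8, 0.3]))   # mean 0.6 -> "S1"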

    [0169] All of the deep neural network classifiers described herein are preferably convolutional neural networks (CNNs). Advantageously, the CNNs used may have been pre-trained on unrelated image data, such as for example from the ImageNet database (http://www.image-net.org). The present inventors have found a 50-layer CNN to be adequate for the present use, but alternative implementations including e.g. additional layers are envisaged. CNNs trained using a deep residual learning framework (He et al., Deep Residual Learning for Image Recognition, 2015, arXiv:1512.03385, available at https://arxiv.org/pdf/1512.03385.pdf and incorporated herein by reference) have been found to be particularly suitable.

    [0170] In step 246, the predictions from the SSN are compared to the corresponding severity class labels (derived from the severity information in the training data) in order to evaluate the performance of the SSN. In embodiments, evaluating the performance of the SSN (first deep neural network) comprises quantifying the area under the receiver operating characteristic curve (AUC) using validation image data. In embodiments, evaluating the performance of the SSN comprises computing Cohen's kappa using validation image data. The validation and training image data may form part of the same data, and in particular have all of the same characteristics as described above. A particular example of this is the evaluation of the SSN by performing cross-validation using the training image data, such as e.g. 5- or 10-fold cross-validation. In embodiments, evaluating the performance of the first deep neural network comprises performing 5- or 10-fold cross-validation using the training image data, and quantifying the AUC or Cohen's kappa for each split of the cross-validation. Preferably, the training image data is separated into a number of splits for cross-validation, wherein sets of frames from the same individual colonoscopy video do not appear in more than one split.
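
    The grouped cross-validation and the two performance metrics mentioned above may be sketched with scikit-learn as follows (a sketch on synthetic stand-in data; in practice the probabilities would come from the trained SSN and the groups would identify the source colonoscopy video of each frame):

        import numpy as np
        from sklearn.metrics import cohen_kappa_score, roc_auc_score
        from sklearn.model_selection import GroupKFold

        rng = np.random.default_rng(0)
        labels = rng.integers(0, 2, size=500)              # severity class labels
        probs = np.clip(labels * 0.6 + rng.normal(0.2, 0.2, size=500), 0, 1)
        videos = rng.integers(0, 50, size=500)             # source video per frame

        # Frames from one video never appear in more than one split.
        for fold, (_, val_idx) in enumerate(
                GroupKFold(n_splits=5).split(probs.reshape(-1, 1), labels,
                                             groups=videos)):
            auc = roc_auc_score(labels[val_idx], probs[val_idx])
            kappa = cohen_kappa_score(labels[val_idx],
                                      (probs[val_idx] > 0.5).astype(int))
            print(f"fold {fold}: AUC={auc:.3f}, Cohen's kappa={kappa:.3f}")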

    [0171] The output of the SSN, including for example the summarised severity class 350C and/or the predictions for each frame (probabilities 350A of belonging to one or more classes or discrete class assignments 350B), may be output to a user, for example using a display. This information may be useful in assessing the severity of ulcerative colitis in a subject, particularly where the output of the SSN corresponds to endoscopic severity scores or ranges of scores, such as the MCES.

    [0172] A method of assessing the severity of ulcerative colitis in a subject will now be described by reference to FIG. 4. A colonoscopy video 400 from the subject (or a portion thereof, such as e.g. a portion that shows a single anatomical section, a portion that does not include all of the original data to reduce its size, etc.), for example obtained from a user via a user interface or from a database 202, is analysed as described above by reference to FIGS. 2 and 3. In particular, the image data from the colonoscopy video 400 (i.e. all of the frames of the colonoscopy video 400) is optionally provided to a QCN to apply 430 a quality-based filter to the frames by classifying each frame into at least a first quality class and a second quality class, where only frames in the first quality class are used for the severity assessment. The (optionally filtered) image data 435 (i.e. all or some of the frames of the colonoscopy video 400, depending on whether quality-based filtering is used) is then assessed 450A/450B for severity of ulcerative colitis using a SSN that classifies the data 435 into at least a first severity class and a second severity class, the first severity class being associated with more severe ulcerative colitis than the second severity class. In the embodiment shown on FIG. 4, two separate severity assessments 450A/450B of the image data 435 are performed: a first SSN is used in assessment 450A to classify image data between a first severity class corresponding to MCES>1 and a second severity class corresponding to MCES≤1. A second SSN is used in assessment 450B to classify image data between a first severity class corresponding to MCES>2 and a second severity class corresponding to MCES≤2. Other configurations are possible and envisaged, such as e.g. configurations that use more than two separate severity assessments with different SSNs, a single severity assessment using a multiclass SSN, one or more separate severity assessments that use SSNs classifying data into classes that correspond to values of one or more different endoscopic scoring schemes (including but not limited to MCES), etc. Each SSN produces as an output 455A/455B probabilities that each frame of the data 435 or each segment (set of frames in the data) belongs to the first severity class, or discrete severity classes derived from these probabilities. These outputs 455A/455B are used to decide the appropriate course of treatment for the subject at steps 460/470. In particular, at step 460 it is determined whether the output 455B from the second assessment 450B is indicative of an MCES>2. For example, if a summarised class prediction derived from the predictions of SSN2 for any segment of the data 435 is the first severity class, then the output 455B from the second assessment 450B may be considered to be indicative of an MCES>2. If that is the case, then the subject is selected 465 for a first treatment. If that is not the case, then it is determined at step 470 whether the output 455A from the first assessment 450A is indicative of an MCES>1. For example, if a summarised class prediction derived from the predictions of SSN1 for any segment of the data 435 is the first severity class, then the output 455A from the first assessment 450A may be considered to be indicative of an MCES>1. If that is the case, then the subject is selected 475 for a second treatment.
Selecting a subject for treatment may refer to recommending the treatment and/or identifying the subject as likely to benefit from the treatment and/or identifying the subject as likely to require the particular treatment. Optionally, the method may further comprise treating the subject in accordance with the selected treatment.
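
    The treatment-selection cascade of steps 460/470 may be sketched as follows (a minimal Python sketch; "S1" denotes the first severity class of the respective SSN, and the returned strings are illustrative placeholders for the first and second treatments):

        def select_treatment(segment_classes_ssn2, segment_classes_ssn1):
            # segment_classes_ssn2: summarised class per segment from the SSN
            # separating MCES>2 from MCES<=2; segment_classes_ssn1: same for
            # the SSN separating MCES>1 from MCES<=1.
            if "S1" in segment_classes_ssn2:      # step 460: any segment MCES>2?
                return "first treatment"          # step 465
            if "S1" in segment_classes_ssn1:      # step 470: any segment MCES>1?
                return "second treatment"         # step 475
            return None  # no treatment selected by this scheme

        print(select_treatment(["S2", "S2"], ["S1", "S2"]))  # -> "second treatment"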

    [0173] In other embodiments, a single severity assessment may be performed, such as e.g. using the second SSN (SSN2) illustrated on FIG. 4. In such embodiments, the method may comprise treating the subject for UC, or selecting the subject for treatment for UC (i.e. recommending treatment, identifying the subject as one likely to benefit from and/or require treatment), if the output from this assessment is indicative of the first severity class. This may be the case for example where the SSN classifies a set of frames from the image data 400 in the first severity class, or if any set of frames from the image data 400 has a first summarised severity class.

    [0174] As the skilled person would understand, references to using a deep neural network to classify image data (based on severity or quality) may in practice encompass using a plurality of deep neural networks and combining the predictions of the multiple deep neural networks. Each of such a plurality of deep neural networks may have the properties described herein. Similarly, references to training a deep neural network may in fact encompass the training of multiple deep neural networks as described herein, some or all of which may subsequently be used to classify image data according to quality or severity, as the case may be.

    EXAMPLES

    [0175] An exemplary method of providing a tool for analysing colonoscopy videos will now be described. Over 2000 colonoscopy videos were obtained from the HICKORY (NCT02100696) and LAUREL (NCT02165215) clinical trials, both phase III, double-blind, placebo-controlled, multicenter studies investigating the efficacy and safety of etrolizumab in the treatment of patients with moderately to severely active ulcerative colitis (UC). Each video was annotated by expert gastroenterologists as part of the clinical trials to indicate: (1) the anatomical section (rectum, sigmoid, descending colon), and (2) the MCES evaluation for each anatomical section, from two different readers. A total of 104 raw colonoscopy videos were selected by filtering out videos where the two readers did not agree on the MCES for each anatomical section, and videos where the readers flagged quality issues such as inappropriate bowel preparation or suboptimal video quality.

    [0176] Manual quality annotation of each of the 104 raw colonoscopy videos was performed by non-experts, who were asked to define segments of videos that are considered “good quality” or “bad quality”. This was performed based on the following criteria: (i) the camera is far enough from the colon walls to allow for a proper assessment, (ii) the colon walls and their vessels can be assessed at visual inspection, and (iii) visual artifacts are not present or do not occupy more than approximately 10% of the frame. Visual artifacts that were observed included: water, hyperreflective areas, stools, and blurring. In practice, this was performed by non-experts watching the videos and highlighting segments of good/bad quality using ELAN (https://tla.mpi.nl/tools/tla-tools/elan/; Brugman, H., Russel, A. (2004). Annotating Multimedia/Multi-modal Resources with ELAN. In: Proceedings of LREC 2004, Fourth International Conference on Language Resources and Evaluation).

    [0177] The anatomical section annotation was included as a graphical label on each frame of each raw video in these clinical trials. A deep neural network (anatomical section network) was trained to classify each frame of each video into one of the three anatomical section categories (rectum, sigmoid, descending colon) by focussing on the area of the frames comprising the graphical label. This was performed by partially retraining the 50-layer convolutional neural network ResNet50, using Keras (https://keras.io/). In particular, the last 8 layers of ResNet50 were retrained using the stochastic gradient descent (SGD) optimiser as implemented in Keras. The learning rate used was 0.001 and the momentum was 0.9. The trained anatomical section network was able to assign an anatomical section with high confidence for each frame of each video. The result of this process is that for each frame of each of the 104 videos, the following three annotations are available: anatomical section (from the anatomical section network), quality class (from the non-expert segment-level annotation), and MCES (from the expert segment-level annotation). The quality class and MCES are weak labels at least because they were provided in relation to entire segments, where the multiple frames forming a segment are unlikely to all show the visual features that led to the assignment of the label. In particular, for MCES scoring, an anatomical section of colon is assigned the score that corresponds to the most severe lesions seen in the section. In other words, an entire segment of video showing the sigmoid will be assigned an MCES=3 if both readers saw signs of moderate disease activity (marked erythema, lack of vascular pattern, friability, erosions) anywhere in this anatomical section. However, some of the frames in this section may not show these signs. The MCES scoring was converted into a binary severity classification according to two different schemes. A first scheme assigned severity class label 1 to a segment if the MCES scores from the readers were >1, and severity class label 2 otherwise. A second scheme assigned severity class label 1 to a segment if the MCES scores from the readers were >2, and severity class label 2 otherwise.
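
    The partial retraining used throughout these examples may be sketched with Keras as follows (a sketch, assuming 224×224 RGB input frames and the three anatomical classes of this example; the classification head is an assumption, as the exact layer configuration is not specified above):

        from tensorflow.keras import layers, models, optimizers
        from tensorflow.keras.applications import ResNet50

        # Load the 50-layer ResNet pre-trained on ImageNet, without its
        # 1000-class head, with global average pooling on the output.
        base = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                        input_shape=(224, 224, 3))

        # Freeze all layers except the last 8, which are retrained.
        for layer in base.layers[:-8]:
            layer.trainable = False

        # Illustrative head for the three anatomical classes
        # (rectum, sigmoid, descending colon).
        model = models.Sequential([base, layers.Dense(3, activation="softmax")])

        model.compile(optimizer=optimizers.SGD(learning_rate=0.001, momentum=0.9),
                      loss="categorical_crossentropy", metrics=["accuracy"])

        # model.fit(frames, one_hot_section_labels, ...) would then retrain the
        # unfrozen layers on the graphically labelled frames.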

    [0178] All 104 videos were recorded at 24 frames per second, and all frames were used for training of the quality control network (QCN). This deep neural network was trained to classify each frame of each of the 104 raw videos into a good quality category and a bad quality category. In particular, the quality control network was trained to provide, for each frame, a probability of the frame belonging to the “good quality” class. This was performed by partially retraining the 50-layer convolutional neural network ResNet50, using Keras as explained above. In particular, the last 8 layers were retrained using SGD, a learning rate of 0.001 and a momentum of 0.9. Frames were considered to be classified as “good quality” if the predicted probability of the frame belonging to the “good quality” class (P(good)) exceeded 0.97. This threshold led to the selection of about 20 frames per raw colonoscopy video. A threshold of 0.95, leading to the selection of about 30 frames per raw colonoscopy video (about 9360 frames in total), was also tested, with similar results (not shown). The AUC for the trained QCN was 0.93±0.05.
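
    The quality gate itself reduces to a simple threshold on the QCN output. A minimal sketch follows, in which the network handle qcn is a hypothetical name and the network is assumed to output one P(good) value per frame:

        import numpy as np

        P_GOOD = 0.97  # the threshold used above; 0.95 gave similar results

        def select_good_frames(frames: np.ndarray, qcn) -> np.ndarray:
            # `qcn` (hypothetical name) is the trained quality control network,
            # assumed to return one P("good quality") value per input frame.
            p_good = np.asarray(qcn.predict(frames)).reshape(-1)
            return frames[p_good > P_GOOD]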

    [0179] A total of about 6200 frames predicted to be of “good quality” (according to the threshold of P(good)>0.97) were selected. All of these frames were used to separately train two deep neural networks (severity scoring networks, SSNs): a first SSN used the binary severity classification labels according to the first binary scheme described above (MCES>1, MCES≤1), and a second SSN used the binary severity classification labels according to the second binary scheme described above (MCES>2, MCES≤2). As a result, the first SSN was trained to output, for each frame, a probability of the frame belonging to the first severity class MCES>1, P(MCES>1). Similarly, the second SSN was trained to output, for each frame, a probability of the frame belonging to the first severity class MCES>2, P(MCES>2). Both SSNs were trained by partially retraining the 50-layer convolutional neural network ResNet50, using Keras as explained above. A frame was considered to be assigned to the first severity class by the first and second severity scoring networks if P(MCES>1)>0.5 and P(MCES>2)>0.5, respectively. A summary classification was computed for each anatomical section of each video (using the anatomical section label from the anatomical section network) by computing the average probability of class 1 across all frames from the same anatomical section A of the same video Y. A segment was considered to be assigned a first severity class label by the first and second severity scoring networks if average(P.sub.A,Y(MCES>1))>0.5 and average(P.sub.A,Y(MCES>2))>0.5, respectively.
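
    The segment-level summary can be expressed as a grouped average of the per-frame probabilities. The following sketch uses pandas; the column names and the values shown are illustrative assumptions:

        import pandas as pd

        # Per-frame SSN outputs; column names and values are illustrative only.
        frames = pd.DataFrame({
            "video":      ["v1", "v1", "v1", "v2"],
            "section":    ["sigmoid", "sigmoid", "rectum", "rectum"],
            "p_mces_gt1": [0.80, 0.40, 0.20, 0.90],
        })

        # Average P(MCES>1) over all frames sharing a video and anatomical
        # section, then assign the first severity class where the mean
        # exceeds 0.5.
        segment_p = frames.groupby(["video", "section"])["p_mces_gt1"].mean()
        segment_is_class1 = segment_p > 0.5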

    [0180] The two SSNs were evaluated retrospectively by performing five-fold cross-validation using the same data that was used for training. In particular, the approximately 6200 quality-selected frames were divided into training, tuning and validation sets according to a 60%/20%/20% scheme, with the additional rule that frames coming from videos from the same patient only appear in one of the sets. The ROC curve was calculated for each iteration of the five-fold cross-validation by varying the threshold applied to classify a segment in the first severity class, comparing the predicted class assignment with the expert-derived class assignment (binary severity classification labels derived from the expert annotations according to the binary schemes described above), and calculating the corresponding true positive and false positive rates. The corresponding area under the curve (AUC) was also calculated for each ROC curve. An average ROC curve and corresponding standard deviation were then calculated, as well as the AUC for the average ROC curve.
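
    One way to implement the patient-level constraint is a grouped K-fold split, as sketched below using scikit-learn. The helper train_and_predict is a hypothetical name, and the sketch collapses the 60%/20%/20% train/tune/validation scheme described above into a single train/validation split per fold:

        import numpy as np
        from sklearn.model_selection import GroupKFold
        from sklearn.metrics import roc_curve, auc

        def cross_validate_ssn(X, y, patient_ids, train_and_predict, n_splits=5):
            # `train_and_predict(X_tr, y_tr, X_va)` is a hypothetical helper
            # that fits an SSN on the training frames and returns
            # P(first severity class) for the validation frames.
            aucs = []
            folds = GroupKFold(n_splits=n_splits)
            for tr, va in folds.split(X, y, groups=patient_ids):
                p_va = train_and_predict(X[tr], y[tr], X[va])
                fpr, tpr, _ = roc_curve(y[va], p_va)
                aucs.append(auc(fpr, tpr))
            return float(np.mean(aucs)), float(np.std(aucs))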

    [0181] The results of these analyses are shown in FIGS. 5A and 5B, respectively for the first SSN and the second SSN. These figures show the ROC curves (true positive rate as a function of false positive rate) obtained for each of the cross-validation iterations (thin curves), the average across these (bold curve), and the ±1 standard deviation envelope (greyed area). The ROC curves shown correspond to the TPR and FPR values for each value of the above-mentioned threshold between 0 and 1, in steps of 0.05. For the first SSN, the AUC (average±standard deviation) was 0.76±0.08. For the second SSN, the AUC (average±standard deviation) was 0.81±0.06. These data show that the fully automated colonoscopy video analysis approach was able to provide accurate predictions of MCES values for individual colon sections, using raw endoscopic videos as the sole input, despite the training being performed using weak labels and a relatively small amount of training data (104 raw videos). Further, MCES≤1 is typically classified as “remission” in clinical trials, and a trustworthy automatic approach to identifying colonoscopy videos that show that a subject is in remission may therefore be particularly valuable.

    [0182] A larger data set of approximately 1000 videos was subjected to quality control as described above. All frames passing the quality control check, together with their original MCES (0 to 3) annotations, were used to train a further SSN. This SSN was thus trained to output, for each frame, the respective probabilities of the frame belonging to each of four classes corresponding to the four levels of the MCES scale. In some examples, an ordinal classification model was used, as described in Cao et al. (Rank-consistent Ordinal Regression for Neural Networks, 2019, arXiv:1901.07884v4). The corresponding average probabilities across frames in a segment were then calculated, and a single predicted MCES score was assigned to each segment as the MCES score with the highest average probability. This SSN was evaluated by cross-validation as explained above, except that instead of calculating ROC curves and AUCs, Cohen's kappa coefficient was calculated for each iteration of the cross-validation.
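
    The per-segment MCES assignment and the agreement metric can be sketched as follows; the probability values and label vectors are illustrative only:

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        # Per-frame probabilities for one segment, one column per MCES
        # level 0..3 (illustrative values).
        frame_probs = np.array([[0.10, 0.20, 0.50, 0.20],
                                [0.05, 0.15, 0.55, 0.25],
                                [0.10, 0.10, 0.40, 0.40]])

        # Average across frames, then assign the MCES with the highest
        # mean probability.
        predicted_mces = int(np.argmax(frame_probs.mean(axis=0)))  # -> 2

        # Agreement with the expert annotations over many segments
        # (illustrative label vectors).
        predicted = [2, 0, 3, 1, 2]
        expert = [2, 0, 2, 1, 3]
        kappa = cohen_kappa_score(expert, predicted)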

    [0183] The term “computer system” includes the hardware, software and data storage devices for embodying a system or carrying out a method according to the above described embodiments. For example, a computer system may comprise a central processing unit (CPU), input means, output means and data storage, which may be embodied as one or more connected computing devices. Preferably the computer system has a display, or comprises a computing device that has a display, to provide a visual output display. The data storage may comprise RAM, disk drives or other computer readable media. The computer system may include a plurality of computing devices connected by a network and able to communicate with each other over that network.

    [0184] The methods of the above embodiments may be provided as computer programs or as computer program products or computer readable media carrying a computer program which is arranged, when run on a computer, to perform the method(s) described above.

    [0185] The term “computer readable media” includes, without limitation, any non-transitory medium or media which can be read and accessed directly by a computer or computer system. The media can include, but are not limited to, magnetic storage media such as floppy discs, hard disc storage media and magnetic tape; optical storage media such as optical discs or CD-ROMs; electrical storage media such as memory, including RAM, ROM and flash memory; and hybrids and combinations of the above such as magnetic/optical storage media.

    [0186] Unless context dictates otherwise, the descriptions and definitions of the features set out above are not limited to any particular aspect or embodiment of the invention and apply equally to all aspects and embodiments which are described.

    [0187] The term “and/or” where used herein is to be taken as specific disclosure of each of the two specified features or components with or without the other. For example “A and/or B” is to be taken as specific disclosure of each of (i) A, (ii) B and (iii) A and B, just as if each is set out individually herein.

    [0188] It must be noted that, as used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by the use of the antecedent “about,” it will be understood that the particular value forms another embodiment. The term “about” in relation to a numerical value is optional and means for example +/−10%.

    [0189] Throughout this specification, including the claims which follow, unless the context requires otherwise, the words “comprise” and “include”, and variations such as “comprises”, “comprising”, and “including” will be understood to imply the inclusion of a stated integer or step or group of integers or steps but not the exclusion of any other integer or step or group of integers or steps.

    [0190] Other aspects and embodiments of the invention provide the aspects and embodiments described above with the term “comprising” replaced by the term “consisting of” or “consisting essentially of”, unless the context dictates otherwise.

    [0191] The features disclosed in the foregoing description, or in the following claims, or in the accompanying drawings, expressed in their specific forms or in terms of a means for performing the disclosed function, or a method or process for obtaining the disclosed results, as appropriate, may, separately, or in any combination of such features, be utilised for realising the invention in diverse forms thereof.

    [0192] While the invention has been described in conjunction with the exemplary embodiments described above, many equivalent modifications and variations will be apparent to those skilled in the art when given this disclosure. Accordingly, the exemplary embodiments of the invention set forth above are considered to be illustrative and not limiting. Various changes to the described embodiments may be made without departing from the spirit and scope of the invention.

    [0193] For the avoidance of any doubt, any theoretical explanations provided herein are provided for the purposes of improving the understanding of a reader. The inventors do not wish to be bound by any of these theoretical explanations.

    [0194] Any section headings used herein are for organizational purposes only and are not to be construed as limiting the subject matter described.

    [0195] All documents mentioned in this specification are incorporated herein by reference in their entirety.