Methods and apparatus for analyzing pathology patterns of whole-slide images based on graph deep learning
12586184 · 2026-03-24
Assignee
- Seoul National University R&DB Foundation (Seoul, KR)
- Seoul National University Hospital (Seoul, KR)
Inventors
- Sunghoon Kwon (Seoul, KR)
- Yongju Lee (Goyang-si, KR)
- Kyoungseob Shin (Goyang-si, KR)
- Kyung Chul Moon (Seoul, KR)
- Jeong Hwan Park (Seoul, KR)
- Sohee Oh (Seoul, KR)
CPC classification
G06V20/46
PHYSICS
G16H50/20
PHYSICS
G16H50/30
PHYSICS
G06N7/01
PHYSICS
G06V10/50
PHYSICS
G06V10/44
PHYSICS
A61B8/5223
HUMAN NECESSITIES
G06V10/84
PHYSICS
G06V10/26
PHYSICS
G06N5/045
PHYSICS
G06V10/762
PHYSICS
G06V20/69
PHYSICS
G16H50/00
PHYSICS
International classification
A61B8/00
HUMAN NECESSITIES
G06N5/045
PHYSICS
G06N7/01
PHYSICS
G06V10/26
PHYSICS
G06V10/44
PHYSICS
G06V10/50
PHYSICS
G06V10/762
PHYSICS
G06V10/84
PHYSICS
G06V20/69
PHYSICS
G16H50/00
PHYSICS
G16H50/20
PHYSICS
Abstract
The present invention relates to a method and apparatus for analyzing pathology patterns of whole-slide images based on graph deep learning, which may include: a whole-slide image (WSI) compression step of compressing WSI into a superpatch graph; a graph neural networks (GNN) analysis step of embedding node features and context features into the superpatch graph through a GNN model and calculating contributions for each node and edge; a biomarker acquisition step of classifying and grouping the superpatch graph according to the contributions for each node, connecting the classified and grouped superpatch graph in units of groups to generate connected graphs, normalizing and clustering features of the connected graphs, and acquiring environmental graph biomarkers for each group; and a diagnostic information extraction step of extracting and providing diagnostic information on the WSI based on the environmental graph biomarker for each group.
Claims
1. A method of analyzing pathology patterns of whole-slide images (WSI) using graph deep learning, the method comprising: segmenting the WSI into a plurality of small patches and extracting features of each small patch using a pretrained model; constructing a superpatch graph by grouping spatially adjacent patches with similar features into superpatches and representing each superpatch as a node, and generating edges between superpatches based on spatial information; embedding node features and spatial context features of the superpatch graph using a graph neural network (GNN) model, wherein the spatial context features comprise distance and angle between superpatches embedded via a learnable lookup table; generating a diagnostic score for the WSI using the GNN output; computing contribution scores for each node and edge in the superpatch graph using an integrated gradients method or another attribution method for interpretability; clustering nodes with contribution scores to define subgraphs representing environmental graph biomarkers; and providing diagnostic information for the WSI based on the spatial position and graph structure of the environmental graph biomarkers.
2. The method of claim 1, wherein generating the superpatch graph comprises: segmenting the WSI into N small patches; generating the superpatch graph according to a similarity of the small patches, and then using the superpatch as a node and connecting between the superpatches with an edge to generate the superpatch graph; and integrating features of the small patches in units of superpatches to calculate node features for each superpatch, and calculating edge features through spatial information between the superpatches.
3. The method of claim 1, wherein the GNN model is configured to: calculate contributions for each edge between the superpatches based on context features that reflect heterogeneous surrounding environmental conditions; and incorporate spatial information between superpatches to promote location information transfer between the superpatches.
4. The method of claim 2 or 3, wherein the spatial information includes a distance and an angle between the superpatches.
5. The method of claim 1, wherein embedding the node features and the spatial context features using the GNN model comprises: reducing a vector dimension of the node features for each superpatch; embedding the node features and the context features in the superpatch graph through the GNN model; extracting the context features through the GNN model and calculating the diagnostic score; and calculating the contributions for each node, wherein a contribution between adjacent superpatches, including spatial information between the superpatches, is calculated to reflect a heterogeneous surrounding environment feature to the context feature.
6. The method of claim 1, further comprising: generating and displaying subgraphs by connecting between the superpatches with the edge using the superpatch as a node and additionally guiding at least one of contributions for each node and contributions for each edge; classifying and grouping the subgraphs according to the contributions for each node, and then connecting the classified and grouped subgraphs in the units of groups to generate the connected graphs, and averaging all the node features in the connected graph to define connected graph features; and normalizing and clustering all the connected graph features within the same group to extract biomarkers of each group.
7. The method of claim 6, wherein, in the step of defining the connected graph features, only graphs having a predetermined number of nodes or more among largest connected graphs within the same group are defined as the connected graph.
8. The method of claim 1, wherein the diagnostic information is generated by: detecting and notifying the risk in the WSI based on a connectivity analysis result of the connected graph; or stratifying a patient's risk grade according to a graphical analysis result of the connected graph.
9. An apparatus for analyzing pathology patterns of whole-slide images (WSI) using graph deep learning, the apparatus comprising: a WSI compression unit, comprising a processor and memory, configured to segment the WSI into a plurality of small patches, extract patch-level features using a pretrained model, and compress the patches into a superpatch graph by grouping spatially adjacent and feature-similar patches into superpatches, each represented as a node; a graph analysis unit, configured to embed node features and spatial context features of the superpatch graph using a graph neural network (GNN), wherein the context features include distance and angle between superpatches; a contribution analysis unit, configured to compute contribution scores for each node and edge using an integrated gradients method or another attribution method for interpretability; a graph grouping unit, configured to classify and group superpatches based on their contribution scores to identify connected subgraphs representing environmental graph biomarkers; a biomarker acquisition unit, configured to extract environmental graph biomarkers by analyzing spatial topology of high-contribution subgraphs; and a diagnostic information extraction unit, configured to generate and provide a diagnostic score, based on the identified graph biomarkers and their spatial distribution in the WSI.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(15) Hereinafter, preferred embodiments of the present invention will be described in detail. When it is determined that a detailed description of any known art related to the present invention may obscure the gist of the present invention, that description will be omitted. It should be understood that a singular expression includes the plural expression unless the context clearly indicates otherwise, and that the terms "comprise" or "have" used in this specification specify the presence of stated features, numerals, steps, operations, components, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, numerals, steps, operations, components, parts, or combinations thereof. In addition, in performing a method or a manufacturing method, the processes constituting the method may occur in an order different from the specified order unless a specific order is explicitly described in context. That is, the respective steps may be performed in the described sequence, be performed at substantially the same time, or be performed in a sequence opposite to the described sequence.
(16) Since the present invention may be variously modified and have several exemplary embodiments, specific exemplary embodiments will be illustrated and be described in detail in a detailed description. However, it is to be understood that the present invention is not limited to a specific exemplary embodiment, but includes all modifications, equivalents, and substitutions without departing from the scope and spirit of the present invention.
(17) The technology disclosed in the present specification is not limited to implementation examples, but may be implemented in other forms. However, exemplary embodiments introduced herein are provided to make disclosed contents thorough and complete and sufficiently transfer the spirit of the present invention to those skilled in the art. In the drawing, in order to clearly express the components of each device, the size of the components, such as width or thickness, is shown somewhat enlarged. When describing the drawings as a whole, it has been described from the observer's point of view, and when one element is referred to as being located on top of another element, this includes all the meanings that one element may be located directly on another element, or additional elements may be interposed between them. In addition, those skilled in the art will be able to implement the spirit of the present invention in various other forms without departing from the technical spirit of the present invention. In addition, like reference numerals in a plurality of drawings denote elements that are substantially the same as each other.
(18) In this specification, the term "and/or" includes a combination of a plurality of recited items or any one of a plurality of recited items. In the present specification, "A or B" may include A, B, or both A and B.
(20) As illustrated in
(21) As such, according to the present invention, it is possible to learn and analyze tumor-environmental context using graph deep learning (TEA graph) which is a GNN-based method of analyzing contextual histopathological features of gigapixel-sized WSI in a semi-supervised manner.
(22) This compresses and represents the WSI with superpatches, reduces the memory required for the graph representation, and expands the unit of image analysis from individual patches to the whole slide.
(23) By using the entire WSI while maintaining the spatial relationship of each local feature, it is possible to extract pathological context features through the GNN model, interpret the WSI with per-edge attention scores and the integrated gradients (IG) method, predict the risk of events such as death and metastasis, and analyze pathological context features associated with prognosis.
(24) In addition, experts may train on the WSI with only label information such as patient status, without the need to manually annotate regions of interest, and may extract a variety of interpretable histopathological prognostic markers. In addition, by analyzing risk-related features, it is possible to stratify patients' risk more clearly than the existing histological rating.
(25) Hereinafter, referring to
(27) WSI Compression Step (S10)
(28) S11: Pathological Feature Extraction
(29) First, the WSI is pre-processed with an Otsu threshold to filter out artifacts of the WSI. For example, when a tissue sample is prepared on a pathology glass slide, the WSI includes a tissue sample and a white background that is a glass surface of a pathology glass slide. Accordingly, a patch composed of an Otsu mask having a predetermined percentage (e.g., 75%) or more is selected, so only the tissue sample may be extracted from the WSI and the unnecessary white background may be removed.
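For illustration only, the Otsu-based tissue filtering described above can be sketched as follows. The helper names (`otsu_threshold`, `keep_patch`) and the toy slide are assumptions for the sketch, not part of the disclosed implementation, which would operate on actual WSI tiles; the 75% tissue fraction matches the example percentage in the text.

```python
import numpy as np

def otsu_threshold(gray):
    """Compute Otsu's threshold on an 8-bit grayscale image by
    maximizing the between-class variance over all cut points."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum_w[t - 1]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def keep_patch(gray_patch, thresh, tissue_fraction=0.75):
    """Keep a patch only if at least 75% of its pixels lie on the dark
    (tissue) side of the Otsu threshold; the white glass background
    is brighter than the threshold and is discarded."""
    return (gray_patch < thresh).mean() >= tissue_fraction

# toy slide: a dark "tissue" half next to a bright "background" half
tissue = np.full((256, 256), 80, dtype=np.uint8)
background = np.full((256, 256), 240, dtype=np.uint8)
slide = np.concatenate([tissue, background], axis=1)
t = otsu_threshold(slide)
```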
(30) As illustrated in
(31) The pathological features are extracted through a pre-trained EfficientNet. For the ccRCC WSI, the EfficientNet pre-trained for pathology applications is further optimized by performing transfer learning on the whole network, without freezing layers, for a cancer/normal classifier. To this end, 2,000 small patch images of 256×256 pixels are prepared for each of the tumor and normal classes, extracted from WSI manually annotated by a pathologist. The transfer learning is an optional step for improving the performance of the TEA graph, since similar results may be obtained without it on TCGA data sets of cancer types other than ccRCC.
(32) To define the similar patches, a cosine similarity and a spatial distance (pixel-level L2-norm distance between patches) of features extracted from the pre-trained EfficientNet representing a similarity between images (or similarity between omics) are considered.
(33) To maximize the compression of similar patches, each patch is compared for spatial correlation with the patches within two patch distances (up to 24 patches in a 5×5 patch window; one patch distance equals 256 pixels); the cosine similarity of the patch features extracted from the pre-trained EfficientNet is then measured, and two patches are defined as similar when their cosine similarity is higher than the defined threshold.
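The similar-patch rule above can be sketched as follows. The function name, the 0.9 similarity threshold, and the toy grid are illustrative assumptions; the similarity direction follows the natural reading that similar patches have high cosine similarity.

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_similar_patches(coords, feats, sim_threshold=0.9):
    """For each patch, list the patches within two grid steps (a 5x5
    window, i.e. up to 24 neighbours) whose feature cosine similarity
    exceeds sim_threshold. coords: (N, 2) integer grid positions,
    feats: (N, F) patch feature vectors."""
    coords = np.asarray(coords)
    similar = {i: [] for i in range(len(coords))}
    for i in range(len(coords)):
        for j in range(len(coords)):
            if i == j:
                continue
            # two patch distances on the grid = Chebyshev distance <= 2
            if np.abs(coords[i] - coords[j]).max() <= 2:
                if cosine_sim(feats[i], feats[j]) > sim_threshold:
                    similar[i].append(j)
    return similar

# toy grid: patches 0 and 1 are adjacent and near-identical, patch 2 is
# adjacent but dissimilar, patch 3 is similar to 0 but too far away
coords = [(0, 0), (0, 1), (1, 0), (9, 9)]
feats = np.array([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0], [1.0, 0.0]])
sim = find_similar_patches(coords, feats)
```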
(34) S12: Generate Superpatch Graph
(35) When the similar patches are defined, the patches are sorted according to the number of similar patches each patch has; then, starting with the patch having the most similar patches, that patch and the similar patches belonging to it are removed together and merged into a superpatch, and the process is repeated to generate the superpatches.
(36) After each superpatch is set as a single node of the graph, an edge between two nodes is generated when the distance between them is smaller than a preset distance (e.g., 5 patch distances), considering only the spatial distance between the superpatches, thereby generating the superpatch graph.
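The greedy merging and edge generation of S12 can be sketched as follows. The function names, the toy similarity map, and the centre coordinates are assumptions for illustration; the 5-patch edge threshold follows the example value above.

```python
import numpy as np

def build_superpatches(similar):
    """Greedy merge: repeatedly take the unassigned patch with the most
    still-unassigned similar patches and absorb them into one
    superpatch. `similar` maps patch id -> list of similar patch ids."""
    unassigned = set(similar)
    superpatches = []
    while unassigned:
        seed = max(unassigned,
                   key=lambda p: len([q for q in similar[p] if q in unassigned]))
        members = {seed} | {q for q in similar[seed] if q in unassigned}
        superpatches.append(sorted(members))
        unassigned -= members
    return superpatches

def build_edges(centers, max_dist=5.0):
    """Connect superpatches whose centres are closer than max_dist
    patch distances, using only spatial distance."""
    centers = np.asarray(centers, dtype=float)
    edges = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if np.linalg.norm(centers[i] - centers[j]) < max_dist:
                edges.append((i, j))
    return edges

# patch 0 is similar to 1 and 2; patches 3 and 4 stand alone
similar = {0: [1, 2], 1: [0], 2: [0], 3: [], 4: []}
sps = build_superpatches(similar)
edges = build_edges([(0.0, 0.0), (3.0, 0.0), (10.0, 0.0)])
```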
(37) S13: Extract Network Feature
(38) The node features for each superpatch are calculated by collecting and averaging (i.e., integrating) the features of the small patches in units of superpatches, and the edge features embedding the geometric structure of the superpatches are calculated from the per-patient normalized spatial distance and angle between the two connected superpatches.
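The feature extraction of S13 can be sketched as follows. The helper names and the normalization by a maximum distance are assumptions; the disclosed method normalizes distance per patient, which this sketch stands in for with a single `max_dist` argument.

```python
import numpy as np

def node_features(patch_feats, superpatches):
    """Average the member patches' features to obtain one node feature
    vector per superpatch."""
    return np.stack([patch_feats[m].mean(axis=0) for m in superpatches])

def edge_geometry(ci, cj, max_dist):
    """Normalised distance and angle between two superpatch centres
    (one possible reading of the 'spatial information' above)."""
    d = np.linalg.norm(np.subtract(cj, ci))
    angle = np.arctan2(cj[1] - ci[1], cj[0] - ci[0])  # radians in (-pi, pi]
    return d / max_dist, angle

patch_feats = np.array([[1.0, 3.0], [3.0, 5.0], [10.0, 10.0]])
sps = [[0, 1], [2]]                       # two superpatches
nf = node_features(patch_feats, sps)
dist, ang = edge_geometry((0.0, 0.0), (3.0, 4.0), max_dist=5.0)
```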
(39) GNN Analysis Step (S20)
(40) S21: Construct and Learn GNN Model
(41) In the present invention, for learning and analysis of histopathological context in WSI, GNN, which is a neural network optimized for graph structure processing, is used as illustrated in
(42) The GNN of the present invention selects a graph attention network (GAT) as the backbone model to ensure interpretability and maximize learning capacity. The GAT can differentiate the weight of each node when aggregating neighboring features by learning the contribution of each edge. In this case, the contribution of each edge may be calculated as an attention score or the like, and is also used to interpret the importance of each connection between superpatches. Hereinafter, for convenience of description, a case in which the contributions for each edge are calculated through the attention score will be described as an example.
(43) In particular, the attention score calculation is modified to include the geometric structure features among the edge features, and a 3-layer GAT with two attention heads per layer (100 dimensions per head) is used as the basic structure, which has a receptive field sufficient to capture the environmental features with the GNN. To represent the geometric structure features as learnable parameters, the normalized distances and angles are quantized to integers 0 to 10, and lookup tables with learnable features are created for the distance and angle.
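The quantization and lookup tables can be sketched as follows. In practice the tables would be `torch.nn.Embedding` layers updated by backpropagation; here fixed numpy matrices stand in for them, and the embedding width of 8 is an arbitrary assumption.

```python
import numpy as np

N_BINS = 11          # quantisation levels 0..10, per the description
EMBED_DIM = 8        # assumed embedding width (learnable in practice)

rng = np.random.default_rng(0)
dist_table = rng.normal(size=(N_BINS, EMBED_DIM))   # stand-in for nn.Embedding
angle_table = rng.normal(size=(N_BINS, EMBED_DIM))

def quantise(x):
    """Map a value normalised to [0, 1] onto an integer bin 0..10."""
    return int(np.clip(round(x * 10), 0, 10))

def edge_embedding(norm_dist, norm_angle):
    """Concatenate the looked-up distance and angle embeddings for one edge."""
    return np.concatenate([dist_table[quantise(norm_dist)],
                           angle_table[quantise(norm_angle)]])

e = edge_embedding(0.34, 0.8)
```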
(44) In addition, to prevent overfitting, LayerNorm is applied after the activation of each layer, and a parametric rectified linear unit (PReLU) is used together with residual connections between layers to improve learning capacity and stability.
(45) For risk score calculation, a combination layer that sums outputs of each GAT layer, a two-layer MLP (800 and 400 dimensions) for post-processing and a final fully connected layer are additionally provided to generate and output a single risk score as the output of the WSI.
(46) In the present invention, the MLP and GAT are implemented using PyTorch and PyTorch Geometric, but it is not necessary to be limited thereto.
(47) S22: Embed Node Feature
(48) To allow the WSI to be processed with end-to-end learning, preprocessing using a three-layer MLP (e.g., 800, 400, and 200 dimensions) is performed to reduce the vector dimension of the node features for each superpatch (e.g., to 200 dimensions).
(49) S23: Embed Context Feature
(50) The outputs h.sub.i of the preprocessing MLP are collected to create inputs h.
h = {h_1, h_2, . . . , h_N}, h_i ∈ ℝ^(F_n)
(51) In this case, N denotes the number of superpatches (nodes), and F_n denotes the number of features in the last layer of the preprocessing MLP.
d = {d_11, d_12, . . . , d_N(N-1), d_NN}, d_ij ∈ ℝ^(F_e)
a = {a_11, a_12, . . . , a_N(N-1), a_NN}, a_ij ∈ ℝ^(F_e)
(52) In this case, the distance d and angle a are included as edge features of the superpatch graph as learnable parameters, and F_e denotes the number of learnable parameters encoding the distance and angle features. That is, in the present invention, the distance d and angle a are included in the edge features as the spatial information between the superpatches, but other information may naturally be used in addition if necessary.
(53) In order to reflect the heterogeneous surrounding environment features in the context features, the attention score α_ij for each edge is calculated.
(54) In addition, since it is important to consider the spatial location of each histopathological feature, the attention scores are calculated including the distance and angle features to ensure that location information is transferred between each superpatch.
(55) α_ij = exp(e_ij) / Σ_(k∈N_i) exp(e_ik), where e_ij = a_s^T(W_s·h_i) + a_r^T(W_r·h_j) + a_d^T(W_d·d_ij) + a_a^T(W_a·a_ij)
(56) In this case, W_s, W_r, W_d, and W_a denote the initial linear transformations, shared by each layer, of the source node, the destination node, and the distance and angle features; a_s, a_r, a_d, and a_a denote the single-layer feed-forward neural networks producing the coefficients of the source node, the destination node, and the distance and angle features; and N_i denotes the neighboring superpatches of superpatch i in the graph.
(57) Then, the normalized attention coefficient is used to calculate a linear combination of related features. For sufficient representation, the PReLU is used as a non-linear function.
(58) h_i′ = PReLU(Σ_(j∈N_i) α_ij·(W_r·h_j))
(59) In this case, h_i′ denotes the final context feature.
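The attention and combination steps above can be sketched in numpy as follows. The weight matrices, toy feature widths, and random inputs are stand-ins for learned parameters; this follows the description (per-edge score from source, destination, distance, and angle terms, softmax over the neighborhood, then a PReLU-activated weighted combination) rather than any disclosed code.

```python
import numpy as np

rng = np.random.default_rng(1)
F, Fe = 4, 3                        # toy node / edge feature widths
Ws, Wr = rng.normal(size=(F, F)), rng.normal(size=(F, F))
Wd, Wa = rng.normal(size=(Fe, F)), rng.normal(size=(Fe, F))
a_s, a_r, a_d, a_a = (rng.normal(size=F) for _ in range(4))

def prelu(x, alpha=0.25):
    return np.where(x > 0, x, alpha * x)

def attention_scores(h_i, neigh_h, neigh_d, neigh_a):
    """Per-edge scores combining source-node, destination-node,
    distance, and angle terms, softmax-normalised over the
    neighbourhood of node i."""
    e = np.array([a_s @ (h_i @ Ws) + a_r @ (h_j @ Wr)
                  + a_d @ (d_ij @ Wd) + a_a @ (ang_ij @ Wa)
                  for h_j, d_ij, ang_ij in zip(neigh_h, neigh_d, neigh_a)])
    w = np.exp(e - e.max())         # numerically stable softmax
    return w / w.sum()

def context_feature(h_i, neigh_h, neigh_d, neigh_a):
    """Attention-weighted combination of neighbour features with a
    PReLU non-linearity, giving the context feature of node i."""
    alpha = attention_scores(h_i, neigh_h, neigh_d, neigh_a)
    return prelu((alpha[:, None] * (neigh_h @ Wr)).sum(axis=0))

h_i = rng.normal(size=F)
neigh_h = rng.normal(size=(3, F))   # three neighbouring superpatches
neigh_d = rng.normal(size=(3, Fe))  # per-edge distance features
neigh_a = rng.normal(size=(3, Fe))  # per-edge angle features
alpha = attention_scores(h_i, neigh_h, neigh_d, neigh_a)
ctx = context_feature(h_i, neigh_h, neigh_d, neigh_a)
```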
(60) S24: Calculate Risk Score
(61) In the present invention, survival prediction is performed with the GAT using a Cox regression loss as the training objective. The final fully connected layer of the GNN outputs a predicted risk R = β^T X associated with the input graph, and these risks are input to the negative partial log-likelihood of a Cox proportional hazards regression, which is the loss function of the model.
(62) Loss = −Σ_(i∈U) (R_i − log Σ_(j∈Ω_i) exp(R_j))
(63) In this case, U denotes the list of all patients, and Ω_i denotes the list of patients whose survival time is no shorter than that of patient i (i.e., the risk set of patient i).
(64) For reference, the GNN model updates its parameters to minimize the loss through backpropagation using an Adam optimizer. In this case, it is most preferable to apply the default parameters of the Adam optimizer with a weight decay coefficient of 0.0005 and to use a learning rate scheduler that multiplies the learning rate by 0.95 at regular step intervals for faster convergence of the model, but these specific implementation choices may be adjusted in various ways.
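The Cox loss can be sketched as follows, using the standard formulation in which the risk set of patient i contains every patient whose survival time is at least t_i; function name and toy data are assumptions. In practice this would be a PyTorch loss driving the Adam optimizer described above.

```python
import numpy as np

def cox_partial_loss(risks, times, events):
    """Negative Cox partial log-likelihood, averaged over observed
    events. For each patient i with an event, the risk set is every
    patient whose survival time is at least t_i."""
    risks = np.asarray(risks, dtype=float)
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    loss, n_events = 0.0, 0
    for i in np.where(events)[0]:
        at_risk = times >= times[i]
        loss -= risks[i] - np.log(np.exp(risks[at_risk]).sum())
        n_events += 1
    return loss / max(n_events, 1)

# a model that ranks the early-event patient highest should score a
# lower (better) loss than one that ranks that patient lowest
times = [2.0, 5.0, 8.0]
events = [1, 0, 1]
good = cox_partial_loss([3.0, 0.0, 1.0], times, events)
bad = cox_partial_loss([-3.0, 0.0, 1.0], times, events)
```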
S25: Calculate IG Value
(66) Each node has a node feature V from the pre-trained CNN model. For the contribution analysis for each node, a baseline is defined, and the baseline features need to be interpolated with the original input node features V. In this case, the contributions for each node may be calculated by the IG value or the like. Hereinafter, for convenience of description, a case in which the contributions for each node are calculated through the IG value will be described as an example.
(67) The IG value is calculated using the IntegratedGradients function of the captum (0.4.0) module, and the function uses all-zero node features as the baseline.
(68) The IG value is calculated as follows.
(69) IG(V_k^i) = V_k^i × (1/m) Σ_(t=1..m) ∂F((t/m)·V^i)/∂V_k^i
(70) In this case, V_k^i denotes the k-th node feature of the i-th node, and m denotes the number of interpolation steps.
(71) After calculating the IG of each node for comparison, all the IG values of the entire WSI are normalized using min-max normalization. A node with a large IG value means that the node has a significant influence on the prediction output (risk). The sign of the IG value indicates the direction of influence, and when the sign of the IG value is negative, it means that a direction in which the node affects the risk is a direction in which the risk decreases.
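A dependency-free sketch of integrated gradients with a zero baseline follows (the actual method uses captum, per the text). For a linear model the Riemann-sum IG is exactly w·x, which gives a convenient sanity check; the function name and the toy model are assumptions.

```python
import numpy as np

def integrated_gradients(grad_f, x, steps=50):
    """Integrated gradients with an all-zero baseline, approximated by
    a Riemann sum over `steps` interpolation points:
    IG_k ≈ x_k * (1/m) * sum_{t=1..m} dF((t/m) * x)/dx_k."""
    total = np.zeros_like(x, dtype=float)
    for t in range(1, steps + 1):
        total += grad_f((t / steps) * x)
    return x * total / steps

# sanity check on a linear model F(x) = w.x, whose exact IG is w * x
w = np.array([0.5, -2.0, 1.0])
x = np.array([1.0, 2.0, 3.0])
ig = integrated_gradients(lambda z: w, x)   # gradient of w.x is w

# min-max normalisation across the slide, as described above
ig_norm = (ig - ig.min()) / (ig.max() - ig.min())
```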
(72) For validation check, the IG values for the entire data set are collected, and the IG values are grouped into the high 10%, the mid 10%, and the low 10%. These groups are used to characterize the histopathological features of each group.
(73) The median IG value of each patient's superpatches is selected as the threshold, the patients are divided into two groups accordingly, and Kaplan-Meier survival analysis is performed to determine the prognostic features of each IG group.
(74) In addition, pairs of high attention (>0.9) and low attention (<0.1) superpatches are extracted, and the correlation between the features of each node obtained from the pre-trained CNN is measured. The fraction of superpatches of the high correlation (>0.8) and the low correlation (<0.2) pairs within the entire high and low attention pairs is calculated. In addition, the pairs of the high and low attention superpatches for each of the high, mid, and low IG groups defined above are extracted. The median feature correlation values within all the high and low attention pairs of each IG group are calculated.
(75) Biomarker Acquisition Step (S30)
(76) S31: Visualize Subgraph
(77) In the present invention, the node represents the superpatch and the edge generates and displays the subgraph representing the connectivity between the superpatches.
(78) In addition, color labels are used to visualize together interpretation metrics such as at least one of the IG and the edge attention. The node color represents the IG value, in which a red node means a high IG value superpatch, and a blue node means a low IG value superpatch. Similarly, the edge color represents the attention score, in which a red edge means a higher weight and a blue edge means a lower weight.
(79) S32: Filter Node
(80) Two models are trained on the same WSI but on two different clinical data sets (e.g., cancer-specific survival and metastasis-free survival data), and the IG values from the two models are used to observe the risk-related mortality and metastasis areas in the same patient.
(81) To extract pathological features correlated with poor prognosis in both clinical data sets, nodes having an IG value difference of less than 0.1 between the survival event and the metastasis event are selected. Conversely, nodes having an IG value difference of more than 0.5 between the survival event and the metastasis event are filtered out, since the predictive effect of such nodes is diametrically opposite for the two events.
(82) S33: Extract Environmental Graph Biomarkers Using Connected Graph
(83) After calculating the IG values across the entire data set, the IG values of the high 10%, the mid 10%, and the low 10% as targets for context graph biomarker extraction are grouped.
(84) Among the largest connected graphs within the same group (high, mid, low), only graphs with a preset number of nodes (e.g., 5) or more are defined as the connected graphs, and all node features of a connected graph are averaged to define the connected graph feature. For the node features, two features, a morphological feature from the pre-trained CNN model and a graph feature from the trained TEA graph, are concatenated and used. The graph cluster that represents the largest number of distinguishable clusters is selected.
(85) The features of the connected graphs are normalized in units of groups and the clustering is performed to extract the environmental graph biomarkers of each group. In this case, the clustering may be performed through k-means clustering, but is not necessarily limited thereto.
(86) After visualizing the clustering results using a t-SNE plot, the number of subgraphs in each graph cluster for all patients is counted, the median count value of each graph cluster is measured, and the high count group of each graph cluster is defined as a patient having a higher count value than the median count value of each graph cluster. The low count group represents patients whose count value of each graph cluster is lower than the median value.
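The median-count split described above can be sketched as follows; the function name and toy counts are assumptions, and whether the patient exactly at the median goes to the low group is one reasonable convention consistent with the text.

```python
import numpy as np

def split_by_median_count(counts):
    """counts: dict patient -> number of subgraphs of one graph cluster.
    Patients strictly above the median count form the high count group;
    the rest form the low count group."""
    values = np.array(list(counts.values()), dtype=float)
    median = float(np.median(values))
    high = sorted(p for p, c in counts.items() if c > median)
    low = sorted(p for p, c in counts.items() if c <= median)
    return high, low, median

counts = {"pt1": 0, "pt2": 2, "pt3": 5, "pt4": 9}
high, low, med = split_by_median_count(counts)
```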
(87) The Kaplan-Meier analysis is performed on the high count and low count groups, and a Kaplan-Meier plot of each cluster is acquired. The differences between the curves of the high and low count groups are measured to evaluate the importance of each subgraph cluster as a prognostic biomarker (the P value is calculated with a log-rank test).
(88) All the survival analyses are performed using Python's lifelines package. A cluster having a P value of less than 0.01 is defined as a cluster that can be distinguished from the high/low groups. The graph cluster number with the largest number of identifiable clusters in the high/low group is selected.
(89) Diagnostic Information Extraction Step (S40)
(90) S41: Analyze Risk Area
(91) For analysis for each patch, first, 100 subgraphs are sampled from each graph cluster. Patch-level indexes are created through k-means clustering with patch morphological features from the sampled subgraphs. The patch-level cluster index is assigned to each node of the subgraph in the selected subgraph cluster. Then, each patch-level cluster index pair connected to the edge of the subgraph is counted and normalized with the total number of edges in the subgraph. The patch clusters having high connectivity are selected from the subgraph clusters and the pathological features of the patch-level cluster pairs are interpreted based on the pathological features of each patch-level cluster.
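The edge-pair counting in the paragraph above can be sketched with the standard library; the function name and the toy subgraph are assumptions.

```python
from collections import Counter

def pair_connectivity(edges, node_cluster):
    """Count each (cluster, cluster) pair joined by a subgraph edge and
    normalise by the total number of edges in the subgraph.
    edges: list of (u, v) node pairs; node_cluster: node -> patch-level
    cluster index assigned to that node."""
    pairs = Counter(tuple(sorted((node_cluster[u], node_cluster[v])))
                    for u, v in edges)
    total = len(edges)
    return {pair: n / total for pair, n in pairs.items()}

# toy subgraph: four nodes in two patch-level clusters A and B
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
node_cluster = {0: "A", 1: "B", 2: "A", 3: "B"}
conn = pair_connectivity(edges, node_cluster)
```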
(92) NetworkX is used to visualize each connected graph and labels are designated to the nodes with superpatch labels to characterize label connection patterns of each connected graph. The risk area within the WSI based on the label connection patterns is detected and notified.
S42: Stratify Risk Rating
(94) In order to verify whether the selected subgraph has prognostic power, after calculating the correlation between the graph characteristics of the selected subgraph and other subgraphs belonging to the same graph cluster, the subgraph showing the high correlation (>0.9) between subgraphs having similar pathological characteristics to the selected subgraph is extracted.
(95) In addition, after counting the number of subgraphs that have a high correlation with the selected subgraph for each patient, patients whose count value is higher than the median value among all patients are defined as the high count group, and patients whose count value is lower than the median value are defined as the low count group.
(96) The Kaplan-Meier plot is acquired in the high and low count groups defined using the count values of the above-described highly correlated subgraph, and based on the Kaplan-Meier plot, the patient's risk rating is stratified into survival, progression, and metastasis.
(97) That is, the present invention enables at least one of the WSI risk area analysis operation and the patient risk rating stratification operation to be performed based on the environmental graph biomarkers for each group, and the corresponding diagnostic information may be generated and provided.
(98) In addition, the correlation between biomarkers and drug treatment prognosis, the correlation between biomarkers and mutation prediction results, the correlation between biomarkers and gene expression level prediction, and the like may be additionally derived using the same principle as above, and it is natural that various diagnostic tasks such as drug treatment prognosis prediction, mutation prediction, and gene expression level prediction may be additionally performed based on the extracted correlations.
(100) As illustrated in
(101) The apparatus 100 for analyzing pathology patterns configured to implement one or more embodiments described above may include a personal computer, a server computer, a handheld or laptop device, a mobile device (a mobile phone, a PDA, a media player, or the like), a multiprocessor system, a consumer electronic device, a mini computer, a mainframe computer, a computing device including any system or device described above, and the like, but is not limited thereto.
(102) In addition, the apparatus 100 for analyzing pathology patterns may further include input device(s) and output device(s). Here, the input device(s) may include, for example, a keyboard, a mouse, a pen, an audio input device, a touch input device, an infrared camera, a video input device, any other input device, or the like. In addition, the output device(s) may include, for example, one or more displays, speakers, printers, any other output devices, or the like. In addition, an input device or an output device included in another computing device may be used as the input device(s) or the output device(s).
(103) In addition, it may include a communication module enabling communication with other devices. Here, the communications module may include a modem, a network interface card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a universal serial bus (USB) connection, or another interface for connecting to another computing device. Also, the communication module may include a wired connection or a wireless connection.
(104) The respective components of the apparatus 100 for analyzing pathology patterns described above may be interconnected through various interconnections (for example, a peripheral component interconnect (PCI), a USB, a firmware (IEEE 1394), an optical bus structure, and the like) such as a bus, and the like, or be interconnected by a network 1200.
(105) Terms component, module, system, interface, and the like, used in the present disclosure generally refer to a computer related entity, which is hardware, a combination of hardware and software, software, or software that is being executed. For example, the component may be a process that is being executed on a processor, the processor, an object, an executable structure, an executing thread, a program and/or a computer, but is not limited thereto. For example, both of an application that is being driven on a controller and the controller may be a component. One or more components may exist in the process and/or the executing thread, and the component may be localized on one computer or be distributed between two or more computers.
(107) Five-fold cross-validation is used to evaluate patient-level risk prediction performance of TEA graphs for three different events (survival, progression, and metastasis). The WSI data set of 831 ccRCC patients at Seoul National University Hospital (SNUH) was randomly split into a training set (80%), a validation set (10%), and a test set (10%). Models were trained to predict risk values for patients in all training WSIs and evaluated with test WSIs.
(108) The concordance indexes (C-indexes) of various histopathological data provided by pathologists (WHO/ISUP ratings, TNM stages, and other metadata described in the methods) were compared with those of the predicted risk values obtained from TEA graphs.
(109) Although the TEA graph showed limited performance compared to the TNM stage, which includes information that cannot be inferred from tissue images, such as metastasis status, the TEA graph performed better than the WHO/ISUP rating for all three events and reached an accuracy of 85% in predicting the risk of death of ccRCC patients.
(110) Combined with the data provided by pathologists, the TEA-graph-predicted risk values achieved better performance than any of the provided data alone, which validates the usefulness of the TEA graph. In addition, the TEA-graph-predicted risk values achieved the highest hazard ratios, showing that the predicted risk scores reflect each patient's probability of event occurrence better than the other pathologist-provided data for all events.
(111) The performance of the TEA graph for patient risk stratification was evaluated by quantizing the predicted risk values into four separate groups to generate predictive histopathological ratings. The predicted histopathological rating showed improved stratification of patients' risk compared to the WHO/ISUP rating and, in particular, stratified early-stage patients for both survival and metastasis events.
(112) In the present invention, the TEA graph was additionally validated on external data sets, and its performance was compared with other multi-instance or contextual feature learning models. WSI data sets of renal (KIRC), breast (BRCA), lung (LUAD and NLST), and uterine (UCEC) patients were used for training and evaluation, and comparison with the existing models confirmed the usefulness of the TEA graph predictive risk score. In particular, the TEA graph was shown to outperform other contextual feature learning models that do not use position-embedded edge features for model training.
(113) Special portions of contents of the present invention have been described in detail hereinabove, and it will be obvious to those skilled in the art that this detailed description is only an exemplary embodiment and the scope of the present invention is not limited by this detailed description. Therefore, the substantial scope of the present invention will be defined by the claims and equivalents thereof.