METHOD AND SYSTEM FOR LEARNING NOVEL RELATIONSHIPS AMONG VARIOUS BIOLOGICAL ENTITIES
20230117881 · 2023-04-20
Inventors
CPC classification
G16H50/70
PHYSICS
G16H50/20
PHYSICS
G06N3/0985
PHYSICS
International classification
Abstract
A computer-implemented method for learning novel relationships among various entities, including biological entities such as chemicals, proteins, and diseases, includes establishing a knowledge graph wherein each of the entities is represented as a node and each relationship between the entities is represented as an edge between the respective nodes, and annotating entities in the knowledge graph with objects of one or more data modalities. A neural network system is trained with the knowledge graph, wherein the neural network system treats the knowledge graph and the objects of a respective one of the data modalities in a unified manner by jointly learning embeddings of the nodes from the knowledge graph and embeddings of the objects of the respective one of the data modalities. The learned embeddings are used for identifying novel relationships among the entities.
Claims
1. A computer-implemented method of learning novel relationships among various entities, including biological entities such as chemicals, proteins, and diseases, the method comprising: establishing a knowledge graph wherein each of the entities is represented as a node and each relationship between the entities is represented as an edge between the respective nodes; annotating entities in the knowledge graph with objects of one or more data modalities; training a neural network system with the knowledge graph, wherein the neural network system treats the knowledge graph and the objects of a respective one of the data modalities in a unified manner by jointly learning embeddings of the nodes from the knowledge graph and embeddings of the objects of the respective one of the data modalities; and using the learned embeddings for identifying novel relationships among the entities.
2. The method according to claim 1, wherein the one or more data modalities include objects in the form of biomedical documents describing details of the entities and/or in the form of images.
3. The method according to claim 1, wherein a machine learning model maintained by the neural network system is trained with a single sample at a time, wherein a sample is provided as a triple composed of a head entity, a tail entity and a corresponding relation type between the head entity and the tail entity, and wherein, for a particular head entity and a particular relation type, the tail entities are grouped into a positive subset and into a negative subset in such a way that for each tail entity in the positive subset it holds that a respective triple exists in the knowledge graph, while for each tail entity in the negative subset it holds that no respective triple exists in the knowledge graph.
4. The method according to claim 1, wherein the neural network system is configured to minimize for a particular positive sample a loss function that calculates a Euclidean distance between a distance between the embeddings of the nodes from the knowledge graph for the positive sample and a center of the respective negative samples and a distance between the embeddings from the object of a particular one of the data modalities for the positive sample and a center of the negative samples.
5. The method according to claim 4, wherein the loss function is calculated individually for each pair of utilized data modalities.
6. The method according to claim 1, wherein the neural network system measures a prediction accuracy of a respective trained machine learning model based on learned weights of the embeddings.
7. The method according to claim 1, further comprising introducing a hyperparameter for controlling a tradeoff between a prediction accuracy and the unified embeddings.
8. The method according to claim 1, further comprising: selecting a particular disease; using the neural network system to predict for the selected disease relationships of a form ‘gene associated with disease’; ranking predicted genes according to a likelihood of a respective one of the predicted genes to be associated with the selected disease; and selecting a predefined number of the top-ranked genes as candidates for a knockdown experiment.
9. The method according to claim 1, further comprising: selecting a particular disease; using the neural network system to predict for the selected disease relationships of a form ‘chemical treats disease’; ranking predicted chemicals according to a likelihood of a respective one of the predicted chemicals to treat the selected disease; and selecting a predefined number of the top-ranked chemicals as candidates for personalized drug development.
10. A computer system for learning novel relationships among various entities, including biological entities such as chemicals, proteins, and diseases, the computer system comprising memory and one or more processors which, alone or in combination, are configured to provide for execution of a method comprising: establishing a knowledge graph wherein each of the entities is represented as a node and each relationship between the entities is represented as an edge between the respective nodes; annotating entities in the knowledge graph with objects of one or more data modalities; training a neural network system with the knowledge graph, wherein the neural network system treats the knowledge graph and the objects of a respective one of the data modalities in a unified manner by jointly learning embeddings of the nodes from the knowledge graph and embeddings of the objects of the respective one of the data modalities; and using the learned embeddings for identifying novel relationships among the entities.
11. The computer system according to claim 10, wherein the neural network system is configured to minimize for a particular positive sample a loss function that calculates a Euclidean distance between a distance between the embeddings of the nodes from the knowledge graph for the positive sample and a center of the respective negative samples and a distance between the embeddings from the object of a particular one of the data modalities for the positive sample and a center of the negative samples.
12. The computer system according to claim 11, wherein the loss function is calculated individually for each pair of utilized data modalities.
13. The computer system according to claim 10, wherein the neural network system measures a prediction accuracy of a respective trained machine learning model based on learned weights of the embeddings.
14. The computer system according to claim 10, further comprising a biomedical documents mining component that is configured to prepare biomedical documents based on a common vocabulary and to generate a set of biomedical documents, where each element of the set is a multiset of tokens from the vocabulary.
15. A non-transitory computer-readable medium having instructions thereon, which, upon execution by one or more processors, alone or in combination, and using memory, provides for execution of a method comprising: establishing a knowledge graph wherein each of the entities is represented as a node and each relationship between the entities is represented as an edge between the respective nodes; annotating entities in the knowledge graph with objects of one or more data modalities; training a neural network system with the knowledge graph, wherein the neural network system treats the knowledge graph and the objects of a respective one of the data modalities in a unified manner by jointly learning embeddings of the nodes from the knowledge graph and embeddings of the objects of the respective one of the data modalities; and using the learned embeddings for identifying novel relationships among the entities.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] Subject matter of the present disclosure will be described in even greater detail below based on the exemplary figures. All features described and/or illustrated herein can be used alone or combined in different combinations. The features and advantages of various embodiments will become apparent by reading the following detailed description with reference to the attached drawings, which illustrate the following:
[0012]
[0013]
DETAILED DESCRIPTION
[0014] The present invention improves and further develops a method and system of the initially described type for learning novel relationships among various entities in such a way that the knowledge graph structure and additional data modalities are considered in a unified view.
[0015] In an embodiment, the present invention provides a computer-implemented method of learning novel relationships among various entities, in particular biological entities such as chemicals, proteins, and diseases, the method comprising:
[0016] establishing a knowledge graph wherein each of the entities is represented as a node and each relationship between the entities is represented as an edge between the respective nodes, annotating entities in the knowledge graph with objects of one or more data modalities, training a neural network system with the knowledge graph, wherein the neural network system treats the knowledge graph and the objects of a respective one of the data modalities in a unified manner by jointly learning embeddings of the nodes from the knowledge graph and embeddings of the objects of the respective one of the data modalities, and using the learned embeddings for identifying novel relationships among the entities.
[0017] According to the invention it has first been recognized that the above objective can be accomplished by a method that learns consistent representations of multiple data modalities, in particular biological entities, such as proteins and diseases, based on a knowledge graph, KG, and any additional data modality, like structured annotations and free text describing the entities. These representations may then be used to identify novel relationships, e.g., proteins that are associated with diseases. The novel relationships could be used to prioritize protein targets for new drugs. In contrast to prior art approaches, embodiments of the invention explicitly incorporate the consistency of the representations into the training process. In particular, embodiments of the invention consider the KG structure and documents about the entities in a unified manner. More specifically, uniting the embeddings and learning to predict true relationships is performed in a single step, i.e. the embeddings and the prediction model are learned jointly. This ensures that the embeddings are optimized for prediction.
[0018] Embodiments of the present invention relate to a method and system for learning novel relationships among the various biological entities, like proteins, drugs, and diseases, adopting a network-based approach in which each entity is represented as a node in a graph and each relationship is represented as an edge between the respective entities, wherein different relationship types are represented with different edge types. In the context of the present disclosure such network is referred to as a knowledge graph (KG). Learning novel relationships is then equivalent to predicting “missing” edges from the KG.
[0019] An obvious approach to combine the KG structure and documents (which is present in the literature) is to train independent ML models on the KG and documents and then combine them in some post hoc manner. In the context of neural network embeddings, this corresponds to learning one embedding from the KG structure and one from the documents for each entity; however, even when trained jointly and end-to-end (as done in previous work), these two embeddings are still likely to be completely different for each entity. That is, the embeddings are not unified, but, rather, completely independent.
[0020] To address this issue, embodiments of the invention relate to a link prediction system that implements a neural network learning strategy which combines the KG structure with the documents in a unified framework. In contrast to the “obvious” approach, both embeddings are forced to be similar. The main advantage compared to the obvious approach is that this “pulls together” the embeddings of entities which are distant in the KG but have similar documents. Likewise, it “pushes apart” embeddings of genes that are close in the KG but have very different documents. Thus, novel relationships are predicted using a unified view of the KG and documents. Consequently, these embodiments come along with the advantage that the neural network system learns embeddings of entities which unite the KG structure and additional information from the documents. In particular, the neural network system can learn similarities of documents based on the context of their associated entities, regardless of the syntactic or semantic overlap of their vocabulary.
[0021] In an embodiment, the present invention provides a method for identifying new candidate targets for drugs by predicting genes associated with diseases.
[0022] In another embodiment, the present invention provides a method for drug repurposing, in which an existing drug is used to treat a new disease, by predicting ‘drug—treats—disease’ edges. This is particularly relevant for immune checkpoint inhibitors (ICIs), which have been shown to be very effective for some diseases but to have severe side effects for others. ICIs are commonly used in combination with neoantigen-based personalized cancer vaccines.
[0023] There are several ways to design and further develop the teaching of the present invention in an advantageous way. To this end, reference is made to the dependent claims on the one hand and to the following explanation of preferred embodiments of the invention by way of example, illustrated by the figure, on the other hand. In connection with the explanation of the preferred embodiments of the invention with the aid of the figure, generally preferred embodiments and further developments of the teaching will be explained.
[0024] In the following description of preferred embodiments of the present invention, the following notation will be used:
TABLE-US-00001
Parameter: Description
G: Knowledge graph
V: Set of nodes
E: Set of edges
e.sup.t: Edge types
D: Set of documents
d: Element of D (a document)
v: Element of V (a node)
G.sub.B: Knowledge graph with documents
h: "Head", which occurs in V
t: "Tail", which occurs in V
r: "Relation", which occurs in E
V.sub.p: Set of all protein nodes
V.sub.p.sup.+: Set of proteins that are associated with a certain disease
V.sub.p.sup.−: All proteins which are not part of V.sub.p.sup.+
D.sub.p.sup.+: Set of documents which describe the proteins in V.sub.p.sup.+
D.sub.p.sup.−: Set of documents which describe the proteins in V.sub.p.sup.−
v.sub.j: Element in V.sub.p.sup.+
d.sub.j: Element in D.sub.p.sup.+
x.sub.s: Training sample which consists of nodes and documents (positive and negative samples)
V.sub.s.sup.−: A subset of V.sub.p.sup.−
D.sub.s.sup.−: A subset of D.sub.p.sup.−
Emb.sub.U: Embedding function for positive elements
Emb.sub.U.sup.NEG: Embedding function for negative elements
Emb.sup.R: Embedding function for the relation types
x.sub.i: Placeholder for v.sub.j or d.sub.j
x.sub.j: Placeholder for V.sub.s.sup.− or D.sub.s.sup.−
U: Placeholder for V or D
r.sub.k: A particular relation type
[0025]
Biomedical Knowledge Graph
[0026] According to an embodiment of the invention, the biomedical knowledge graph 100, denoted herein G=(V, E), may consist of a set of nodes 110, denoted V, and a set of edges 120, denoted E. Each node v ∈ V in the biomedical knowledge graph 100 corresponds to one entity, such as a chemical, a protein, or a disease. The edges E characterize the known relationships between these entities.
[0027] In an embodiment, the construction of the knowledge graph G may be performed by manually crafting the graph by considering various structured data sources. Specifically, according to an embodiment entities and their relationships may be extracted from various sources as follows:
[0028] from DisGeNET: "disease—associated with—protein" relationships
[0029] from SIDER: "chemical—treats—disease" relationships
[0030] from STITCH: "chemical—reacts_with—chemical" relationships
[0031] from STITCH: "chemical—interacts with—protein" relationships
[0032] from STRING: "protein—interacts with—protein" relationships
[0033] According to an embodiment, a published approach may be followed to additionally create “disease—similar to—disease” relationships.
[0034] Further, as some data sources provide confidence values for the relationships, these values may be taken into account to filter out relationships with low reliability, since those relationships are themselves already predictions.
[0035] In total, then, according to the described embodiment, the knowledge graph 100 considers three types of entities: diseases, genes (proteins), and chemicals (drugs). Further, it includes six relationship (edge) types. According to the notation used herein, e.sup.t refers to edge type t.
[0036] As will be appreciated by those skilled in the art, other entities, data sources, and relationship extraction strategies than the ones explicitly mentioned above could also be used to construct the KG 100.
[0037] In the following, the facts which are encoded by the graph G are represented as a set of triples of the form (h, r, t), where h ∈ V and t ∈ V are the head and tail entities and r ∈ E is a relation type.
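For illustration, the triple representation described above can be sketched in a few lines of Python; the entity and relation names below are hypothetical examples, not data from the actual KG construction:

```python
# Minimal sketch (not the patented implementation): the biomedical KG as a
# set of (head, relation, tail) triples. All names are illustrative.
triples = {
    ("Cardiomyopathy", "disease_associated_with_protein", "TTN"),
    ("Aspirin", "chemical_treats_disease", "Fever"),
    ("Aspirin", "chemical_interacts_with_protein", "PTGS1"),
}

# Nodes V are all heads and tails; the edge types are the distinct relations.
nodes = {h for h, _, _ in triples} | {t for _, _, t in triples}
edge_types = {r for _, r, _ in triples}

def exists(h, r, t):
    """Check whether the fact (h, r, t) is encoded in the graph."""
    return (h, r, t) in triples

print(sorted(edge_types))
```

Link prediction then amounts to asking `exists`-style questions about triples that are not yet in the set.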
[0038] As an example,
Mining Biomedical Documents
[0039] According to embodiments of the invention, the entities in the KG 100 (which can be considered as objects of a first data modality) are annotated with objects 130 of one or more further data modalities. In the described embodiment, the entities in the KG 100 are annotated with objects 130 of one further data modality, namely biomedical documents. The biomedical documents describe the related entities in the KG 100 in more detail. For instance, proteins may be annotated with their functions or text from academic papers in which they are described. Hereinafter, this additional information is simply referred to as a document 140 for the respective entity.
[0040] According to embodiments, the documents 140 may be retrieved by manually or automatically crawling various data sources, including unstructured sources, like academic papers from PubMed, or structured data sources, such as ontologies like the Gene Ontology. Possible details that are described by the documents 140 are, for instance, location (e.g., in which tissue is a particular protein active), biochemical properties (e.g., the hydrophobicity or molecular weight of a drug), functional behavior (e.g., a description of the phenotypes of individuals with a disease), or higher-level description (such as the outcome of a clinical trial involving a particular drug and disease).
[0041] According to embodiments, all documents 140 may be prepared for further use based on a common vocabulary. As an example, word segmentation-based or frequency-based approaches could be used. Further, standard preprocessing steps like tokenization, lemmatization, and stemming could be used.
[0042] Additionally, each document 140 is associated with at least one of the entities from the KG 100. A document 140 can be associated with more than one entity, and each entity can be associated with more than one document 140. Many approaches can be used to assign documents 140 to entities. For structured data sources, entity identifiers are typically given; for unstructured sources, natural language processing approaches like named entity recognition could be used to extract entity identifiers from the text. The present invention is not limited with regard to the exact approach used for creating the biomedical documents 140 from the raw data sources and matching them to entities.
[0043] According to embodiments, the output of the mining biomedical documents component 150 may be a set of biomedical documents D, where each d ∈ D is a multiset of tokens from the vocabulary. Further, ∀ d ∈ D, it holds that ∃ (v, d) ∈ (V, D), where (V, D) is a set of pairs. This is denoted as G.sub.B=((V, E), (V, D)). G.sub.B will be used as input for the neural network component 200.
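The output format described above can be sketched as follows, assuming a toy vocabulary and illustrative entity names (the actual mining component and its preprocessing are far richer):

```python
from collections import Counter

# Sketch of the document-mining output: each document d ∈ D is a multiset
# of vocabulary tokens (here a Counter), and each document is paired with
# at least one KG entity. Texts and entity names are illustrative.
def to_multiset(text):
    # Minimal preprocessing: tokenization only (lemmatization and
    # stemming, mentioned in the text, are omitted here).
    return Counter(text.lower().split())

raw = {
    "TTN": "titin protein expressed in cardiac muscle muscle",
    "Aspirin": "analgesic drug inhibits prostaglandin synthesis",
}
# The (V, D) pairs of G_B = ((V, E), (V, D)).
entity_document_pairs = {v: to_multiset(text) for v, text in raw.items()}

print(entity_document_pairs["TTN"]["muscle"])  # token multiplicity: 2
```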
[0044] According to embodiments of the invention, the following exemplary information may be extracted from various public sources in order to create the documents 140:
[0045] Gene Ontology: protein functions and locations in the cell (structured annotations)
[0046] Human Phenotype Ontology: phenotypes associated with diseases (structured annotations)
[0047] SIDER, OFFSIDES: chemical (drug) side effects (structured annotations)
[0048] MyGene.info: protein descriptions (free text)
[0049] DrugBank: drug descriptions (free text)
[0050] DisGeNET: disease mechanisms and associated mutations (free text)
[0051]
[0052] All current approaches known in prior art are limited by the network structure. In particular, they do not consider the structure of the KG 100 and documents 140 about the entities in a unified manner. Embodiments of the invention address this shortcoming with a machine learning-based approach in which the knowledge graph, KG, structure 100 on the one hand and documents 140 on the other hand are combined in a unified manner, as will be described in detail below.
[0053] Neural Network System
[0054] An embodiment of a neural network system 200, schematically illustrated in
Training
[0055] For illustration, the following description exemplarily refers to "proteins" and "diseases" as possible head and tail entities, where the corresponding relation is "associated_with" (e.sup.d_aw_p). It should be noted that the remaining combinations of head and tail entities with corresponding relations are trained in a corresponding way.
[0056] For training a machine learning model, a particular disease (e.g., "Cardiomyopathy") and a particular relation r.sub.k (e.g., "associated_with") are kept constant as the head h and relation type r, respectively, and the proteins are grouped into V.sub.p.sup.+ and V.sub.p.sup.−, where ∀ v.sub.p.sup.+ ∈ V.sub.p.sup.+ it holds that a triple of the form (Cardiomyopathy, associated_with, v.sub.p.sup.+) exists in the KG. It holds that V.sub.p.sup.+ ∩ V.sub.p.sup.− = Ø, V.sub.p.sup.+ ∪ V.sub.p.sup.− ⊆ V.sub.p, and V.sub.p ⊆ V.
[0057] Analogously, D.sub.p.sup.+ and D.sub.p.sup.− describe the corresponding biomedical documents of the elements in V.sub.p.sup.+ and V.sub.p.sup.−. That is, every protein in V.sub.p.sup.+ is a correct tail for the triple, while every protein in V.sub.p.sup.− is an incorrect tail.
[0058] As indicated in
[0059] According to an embodiment, as shown at 206 for the nodes V and at 208 for the documents D, the embedding may be performed by means of the following embedding function:
where U ∈ {V, D} and θ.sub.i.sup.Emb.sup.U denotes the learned parameters of the respective embedding function.
[0060] Second, as shown at 210 for the nodes V and at 212 for the documents D, negative sampling may be applied to randomly select V.sub.s.sup.− and D.sub.s.sup.−, where V.sub.s.sup.− ⊆ V.sub.p.sup.− and D.sub.s.sup.− ⊆ D.sub.p.sup.−.
[0061] That is, these sets contain incorrect tails for the selected head and relation. Negative sampling may be performed in accordance with the approach described in Garcia-Duran, Alberto, and Mathias Niepert: "KBlrn: End-to-end learning of knowledge base representations with latent, relational, and numerical features", in Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence (2018), which is incorporated herein by way of reference.
[0062] Analogously to the embedding of the positive elements, the set of negative elements may be embedded as follows, as shown at 214 for the nodes V and at 216 for the documents D:
where Y.sub.s.sup.− is the set of negative samples (that is, V.sub.s.sup.− or D.sub.s.sup.−). The cardinalities of both V.sub.s.sup.− and D.sub.s.sup.− are equal to the number of negative samples. The number of negative samples is a hyperparameter of the method.
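A minimal sketch of the negative-sampling step and the center function Emb.sup.NEG, assuming embeddings are plain NumPy vectors and using simplified uniform sampling (the cited literature describes more elaborate sampling strategies):

```python
import random
import numpy as np

rng = random.Random(0)

def sample_negatives(negative_pool, num_negative_samples):
    # V_s^- ⊆ V_p^- (and analogously D_s^- ⊆ D_p^-); the subset size is
    # the "number of negative samples" hyperparameter.
    return rng.sample(sorted(negative_pool), num_negative_samples)

def emb_neg(embeddings, negatives):
    # Emb^NEG: average embedding of all elements passed to it.
    return np.mean([embeddings[x] for x in negatives], axis=0)

# Illustrative protein identifiers and 2-d embeddings.
embeddings = {"P1": np.array([1.0, 0.0]),
              "P2": np.array([0.0, 1.0]),
              "P3": np.array([1.0, 1.0])}
subset = sample_negatives({"P1", "P2", "P3"}, 2)
center = emb_neg(embeddings, ["P1", "P2"])
print(center)  # [0.5 0.5]
```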
[0063] The embedding of the relation, as shown at 218, is defined as follows:
Emb.sup.R(r.sub.k)=θ.sub.k.sup.Emb.sup.R
where k is the index of a particular relationship type. (According to the exemplary embodiment described herein, there are six relationship types.)
[0064] More concretely, Emb.sub.V is a neural network which gives an embedding for each entity in the KG, while Emb.sup.R is a neural network which gives an embedding for each relationship type. Additionally, Emb.sub.D is a neural network which gives an embedding for each token in the vocabulary, and the embedding for a document is simply the average embedding of all of its tokens. Emb.sub.U.sup.NEG is a simple function that gives the average embedding of all elements passed to it.
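As a sketch of these embedding functions, plain lookup tables of random vectors can stand in for the trained neural networks; dimensions, entities, and tokens below are illustrative:

```python
import numpy as np

dim = 4
rng = np.random.default_rng(0)

# Lookup tables standing in for the trained networks Emb_V, Emb^R, Emb_D.
entity_emb = {e: rng.normal(size=dim) for e in ["Cardiomyopathy", "TTN"]}
relation_emb = {r: rng.normal(size=dim) for r in ["associated_with"]}
token_emb = {t: rng.normal(size=dim) for t in ["titin", "cardiac", "muscle"]}

def emb_v(entity):           # Emb_V: one embedding per KG entity
    return entity_emb[entity]

def emb_r(relation):         # Emb^R: one embedding per relation type
    return relation_emb[relation]

def emb_d(document_tokens):  # Emb_D: a document embeds as the average of
    return np.mean([token_emb[t] for t in document_tokens], axis=0)

doc = ["titin", "cardiac", "muscle", "muscle"]
print(emb_d(doc).shape)  # (4,)
```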
Learning Unified Embeddings
[0065] According to embodiments of the present invention, the neural network system 200 is configured to calculate the Euclidean distance between the embedding of the positive sample x.sub.j and the center of the negative samples Y.sub.s.sup.−:
distance.sub.U(x.sub.j, Y.sub.s.sup.−)=∥Emb.sub.U(x.sub.j)−Emb.sub.U.sup.NEG(Y.sub.s.sup.−)∥.sup.2
as shown at 220 for the nodes V and at 222 for the documents D.
[0066] Further, as shown at 224, the distance between distance.sub.V and distance.sub.D is calculated as L.sub.offset:
L.sub.offset=∥distance.sub.V(v.sub.j, V.sub.s.sup.−)−distance.sub.D(d.sub.j, D.sub.s.sup.−)∥
[0067] That is, distance.sub.V(v.sub.j, V.sub.s.sup.−) gives the distance between the embedding from the KG for the positive sample and the center of the negative ones, while distance.sub.D(d.sub.j, D.sub.s.sup.−) gives the distance between the embedding from the document for the positive sample and the center of the negative ones.
[0068] It should be noted that in standard approaches, these distances may differ greatly; in such cases, L.sub.offset will be large. In contrast, in accordance with embodiments of the present invention, the neural network system 200 aims at minimizing L.sub.offset. Hence, the difference between the distances of v.sub.j and d.sub.j to the respective negative embedded samples V.sub.s.sup.− and D.sub.s.sup.− will be minimized. As a result, the model explicitly accounts for similarity in the KG when learning embeddings for the documents and vice versa. That is, the model explicitly unifies the embeddings of the KG structure and the associated documents, which distinguishes the solution according to the present invention from other known prior art methods.
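The two formulas above can be sketched directly, with illustrative 2-d vectors standing in for learned embeddings:

```python
import numpy as np

# distance_U: squared Euclidean distance between a positive sample's
# embedding and the center of the negative samples.
def distance_u(pos_emb, neg_center):
    return float(np.sum((pos_emb - neg_center) ** 2))

# L_offset: penalizes disagreement between the KG-side and the
# document-side distances, so the two views are pulled together.
def l_offset(v_pos, v_neg_center, d_pos, d_neg_center):
    d_v = distance_u(v_pos, v_neg_center)   # distance_V(v_j, V_s^-)
    d_d = distance_u(d_pos, d_neg_center)   # distance_D(d_j, D_s^-)
    return abs(d_v - d_d)                   # large when the views disagree

v_j = np.array([1.0, 0.0]); v_center = np.array([0.0, 0.0])
d_j = np.array([0.0, 2.0]); d_center = np.array([0.0, 0.0])
print(l_offset(v_j, v_center, d_j, d_center))  # |1 - 4| = 3.0
```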
[0069] It should be noted that in accordance with the above implementation, the model adjusts the learned embeddings of the entities to find a balance between the evidence in the KG and the documents.
Learning Accurate Relation Predictions
[0070] According to embodiments of the invention, in addition to L.sub.offset, the neural network component 200 may also keep track of the prediction accuracy of the model, that is, it measures whether learned weights of the embedding functions are reliable for accurately predicting relations. In particular, the neural network component 200 may use the learned weights of the embedding layers (Emb.sub.V, Emb.sub.D, and Emb.sup.R) to predict the relationships known to be in the KG 100.
[0071] More specifically, as already mentioned above, embodiments of the invention aim at predicting the true tail of a triple when the head and relationship type are fixed. That is, assuming, for example, a particular disease as a given head entity h (such as "Cardiomyopathy" in accordance with the example given above) and a particular relation type r.sub.k ("associated_with"), the aim is to predict a correct tail entity t for the triple (h, r, t). Hereinafter, the node and document associated with the head are referred to as v.sub.head and d.sub.head, respectively. The prediction result is calculated by a function denoted L.sub.predict.
[0072] In contrast to L.sub.offset, L.sub.predict does not average the embeddings of the negative samples (from V.sub.s.sup.− and D.sub.s.sup.−), but rather compares all of them to the correct tail (v.sub.j and d.sub.j) to measure the predictive performance. In particular, according to an embodiment, the embeddings of the fixed head and the fixed relationship type may be combined (multiplied element-wise, "⊙") to group the nodes (as shown at 226) as well as the documents (as shown at 228) in the embedding space by the relation type:
Group.sub.U(x.sub.head, r.sub.k)=Emb.sub.U(x.sub.head)⊙Emb.sup.R(r.sub.k)
[0073] Finally, according to embodiments, the score of each candidate tail (that is, x.sub.j as well as each negative sample) is calculated as the dot product of its embedding with Group.sub.U(x.sub.head, r.sub.k) as follows (as shown at 230 for the nodes and at 232 for the documents):
Predict.sub.U(x.sub.head, r.sub.k, x.sub.j)=Group.sub.U(x.sub.head, r.sub.k)·Emb.sub.U(x.sub.j)
[0074] Each score is a measure of how likely the respective entity correctly completes the triple (h, r, ?). In the optimal case, the score of the negative samples should be 0 while the score of a true tail should be 1.
[0075] As shown in the figure, these scores are calculated for the nodes and documents, and the resulting values are combined (added element-wise) into a score as follows (shown at 234):
Predict(head, r.sub.k, j)=Predict.sub.V(v.sub.head, r.sub.k, v.sub.j)+Predict.sub.D(d.sub.head, r.sub.k, d.sub.j)
[0077] This score is computed for all negative samples and the true tail. Since the exemplary embodiment discussed herein considers two modalities, the combined score for the true tail should be 2, while the combined score for the negative samples should still be 0. In more general terms, i.e., in case of considering n different modalities, the above equation for Predict(head, r.sub.k, j) would be composed of n summands and the combined score for the true tail would be n. The number of L.sub.offset terms grows quadratically in the number of included modalities, since L.sub.offset is calculated for each pair of modalities.
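The grouping and scoring equations above can be sketched with illustrative vectors; with two modalities, the ideal true-tail score is 2 and the ideal negative-sample score is 0:

```python
import numpy as np

def group(head_emb, rel_emb):
    # Group_U: element-wise product of head and relation embeddings.
    return head_emb * rel_emb

def predict_u(head_emb, rel_emb, tail_emb):
    # Predict_U: dot product of the grouped head with a candidate tail.
    return float(np.dot(group(head_emb, rel_emb), tail_emb))

def predict(v_head, d_head, rel, v_tail, d_tail):
    # One summand per modality (here: KG nodes and documents).
    return predict_u(v_head, rel, v_tail) + predict_u(d_head, rel, d_tail)

# Illustrative embeddings chosen so the candidate scores exactly 2.
v_h = np.array([1.0, 1.0]); d_h = np.array([1.0, 0.0])
r = np.array([1.0, 1.0])
v_t = np.array([1.0, 0.0]); d_t = np.array([1.0, 0.0])
print(predict(v_h, d_h, r, v_t, d_t))  # 1.0 + 1.0 = 2.0
```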
[0078] In the exemplary embodiment of
where y.sub.i.sup.− is the i.sup.th negative sample.
[0079] Finally, according to embodiments of the present invention, both loss values may be combined to measure the performance of the respective machine learning model, for example as follows:
L=αL.sub.predict+(1−α)L.sub.offset,
where α ∈ [0,1] is a hyperparameter which controls the tradeoff between accuracy and the unified embeddings. Standard backpropagation algorithms may then be used to update the parameters of Emb.sub.V, Emb.sup.R, and Emb.sub.D so that the predictions become more accurate while still respecting the unification of the KG and document embeddings; that is, the parameters are updated so that L is minimized.
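The combined objective is a simple convex combination; the loss values below are illustrative placeholders for trained-model outputs:

```python
# Combined loss L = α·L_predict + (1 − α)·L_offset, where α trades
# prediction accuracy against embedding unification.
def combined_loss(l_predict, l_offset, alpha):
    assert 0.0 <= alpha <= 1.0   # α ∈ [0, 1] is a hyperparameter
    return alpha * l_predict + (1.0 - alpha) * l_offset

# α = 1 optimizes prediction accuracy only; α = 0 optimizes unification only.
print(combined_loss(0.8, 0.2, 0.5))  # 0.5
```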
[0080] As an example, reference is made to
Ranking Novel Relationships
[0081] According to embodiments of the invention, after training the model, the parameters of the embedding functions Emb.sub.V, Emb.sup.R, and Emb.sub.D are set such that true triples result in a high Predict(·) value, while incorrect triples result in low values.
[0082] That is, the trained model can answer queries of the following forms: (h, r, ?), (h, ?, t), and (?, r, t). Specifically, when given two of the elements, Predict(·) is calculated for all possible values of the missing element of the triple. The respective scores are used to give a ranking from the most-likely (highest-value) to least-likely (lowest) completions of the triple. For cases in which correct completions of the query are known, such rankings can be evaluated using standard information retrieval metrics such as area under the receiver operating characteristic curve, precision@k, and area under the precision-recall curve.
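The ranking step can be sketched as follows; the candidate scores are illustrative stand-ins for trained Predict(·) outputs:

```python
# Answer a query (h, r, ?) by ranking candidate tails from most-likely
# (highest score) to least-likely (lowest score).
def rank_tails(candidate_scores):
    # candidate_scores maps each candidate tail to its Predict(h, r, tail).
    return sorted(candidate_scores, key=candidate_scores.get, reverse=True)

# Hypothetical gene candidates for a fixed disease and relation type.
scores = {"GeneA": 1.7, "GeneB": 0.2, "GeneC": 1.1}
print(rank_tails(scores))  # ['GeneA', 'GeneC', 'GeneB']
```

The top of such a ranking is what the downstream applications below consume, e.g., as candidates for knockdown experiments.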
[0083] As already mentioned above, the pipeline outlined in
L=xL.sub.predict+y.sub.1L.sub.offset,1+ . . . +y.sub.nL.sub.offset,n
where n represents the number of L.sub.offset terms for the utilized data modalities (one term per pair of modalities), and x+Σ.sub.i=1.sup.n y.sub.i=1.
[0084] In an embodiment, the present invention relates to a method for identifying disease-gene associations, which may be applied, e.g., in connection with automating CRISPR-Cas9 knockdown experiments (as described, for instance, in Zhou, Y., Zhu, S., Cai, C. et al. High-throughput screening of a CRISPR/Cas9 library for functional genomics in human cells. Nature 509, 487-491 (2014). https://doi.org/10.1038/nature13166) to understand the effect of specific genes on diseases. Briefly, CRISPR-Cas9 knockdown experiments use a guide RNA which targets a specific gene; after applying the CRISPR-Cas9 compound with a particular guide RNA, the targeted gene no longer functions. (It is "knocked down.") The goal of this system is to identify the genes most relevant for a disease by prioritizing genes for knockdown experiments. (Presumably, these genes are then good candidates for drug discovery, but that idea is not further pursued in the present disclosure.)
[0085] In this case, nodes and edges in the knowledge graph correspond to genes, chemicals, and diseases, as already described and as shown in
[0086] First, a disease of interest is selected. A cell line must be available which is similar to that disease; for example, KG-1 and HL-60 are cell lines for leukemia (for reference, see Koeffler H P, Golde D W. Human myeloid leukemia cell lines: a review. Blood. 1980 Sep;56(3):344-50, available under https://www.ncbi.nlm.nih.gov/pubmed/6996765). Second, the robot distributes Cas9 into all of the wells of the plate. Then, a method of learning novel relationships among various biological entities according to embodiments of the present invention may be used to identify the genes most likely to be associated with the disease. That is, the method predicts “gene associated with disease” relationships for the selected disease and takes the top-ranked genes. The number of selected genes may be determined by the number of wells on the plate. Each gene is assigned to one or more wells on the plate. The robot automatically selects the respective guide RNAs from storage and places them in the microtiter plate in known wells in order to co-incubate with Cas9. After sufficient time, the cells from the cell line are introduced into the wells using an appropriate technique, such as electroporation (https://www.takarabio.com/learning-centers/stem-cell-research/technical-notes/gene-editing-in-hips-cells/generating-clonal-hips-cell-lines-deficient-in-cd81).
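The gene-selection and well-assignment steps above can be sketched as follows. This is a simplified illustration: the function name, the zero-based well indexing, and the fill-in-order assignment policy are assumptions, not details taken from the disclosure:

```python
def select_genes_for_plate(ranked_genes, num_wells, wells_per_gene=1):
    """Take as many top-ranked genes as fit on the plate and assign
    each selected gene to `wells_per_gene` consecutive wells."""
    num_genes = num_wells // wells_per_gene
    selected = ranked_genes[:num_genes]
    assignment = {}
    next_well = 0
    for gene in selected:
        assignment[gene] = list(range(next_well, next_well + wells_per_gene))
        next_well += wells_per_gene
    return assignment

# A 6-well plate with 2 replicate wells per gene holds only 3 genes,
# so the lowest-ranked gene ("g4") is dropped.
plate = select_genes_for_plate(["g1", "g2", "g3", "g4"],
                               num_wells=6, wells_per_gene=2)
```

The input `ranked_genes` corresponds to the top-ranked completions of the “gene associated with disease” query; the resulting mapping tells the robot which guide RNA to place in which known well.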
[0087] Finally, another assay may be used to evaluate the effect of knocking down each gene in the cell line. For example, Konstantinos Tzelepis et al., A CRISPR Dropout Screen Identifies Genetic Vulnerabilities and Therapeutic Targets in Acute Myeloid Leukemia, Cell Reports, Volume 17, Issue 4, 18 October 2016, pages 1193-1205, use RNA sequencing to identify genetic pathways which function differently in the knocked-down cells compared to the unaltered cell line.
[0089] In an alternative embodiment, the present invention relates to a method for identifying disease-gene associations, which may be applied, e.g., in connection with a personalized medicine system for developing personalized drugs. The goal of such a system is to select the best chemical to treat a disease for a particular patient. In this case, nodes in the knowledge graph (KG) again correspond to genes, chemicals, and diseases, as shown in
[0090] First, a patient is diagnosed with a particular disease. Second, a biopsy is taken from the patient; depending on the type of disease, the biopsy may be liquid (for example, for leukemias) or solid (for example, for ovarian cancer). Relevant cells are extracted from the biopsy using standard approaches. For example, a standard protocol (for reference, see Panda, S. K. and Ravindran, B. (2013). Isolation of Human PBMCs. Bio-protocol 3(3): e323. DOI: 10.21769/BioProtoc.323) is available to extract immune cells from blood. The robot distributes the cells into wells on one or more microtiter plates. Then, a method of learning novel relationships among various biological entities according to embodiments of the present invention may be used to identify the chemicals most likely to treat the disease. That is, the method predicts “chemical treats disease” relationships for the patient's disease and takes the top-ranked chemicals. The number of selected chemicals may be determined by the number of wells on the plate. Each chemical is assigned to one or more wells on the plate. The robot automatically selects those chemicals from storage and places them in the microtiter plate in known wells.
[0091] Another assay is used to determine the effect of each chemical on the patient cells. As an example, high throughput ELISpot assays such as the one available from R&D Systems (for reference, see https://www.rndsystems.com/products/dual-color-elispot-kits) could be used to determine which chemicals elicit a response from immune system cells. The number of spots indicates how effective the chemical is for the patient. Finally, the most effective chemical for the particular patient's case may be selected for use in treatment.
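The final selection step described above reduces to picking the chemical with the strongest assay readout. A minimal sketch, assuming spot counts have already been read out per chemical (the chemical names and counts below are invented for illustration):

```python
def most_effective_chemical(spot_counts):
    """Given ELISpot spot counts per chemical, return the chemical
    eliciting the strongest immune response (the most spots)."""
    return max(spot_counts, key=spot_counts.get)

# Hypothetical readout: chemB elicits the strongest response.
best = most_effective_chemical({"chemA": 12, "chemB": 47, "chemC": 3})
```

In practice the readout would come from the plate reader associated with the assay, and ties or near-ties might be resolved by replicate wells or secondary criteria rather than a bare maximum.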
[0092] As will be appreciated by those skilled in the art, the methods described herein are dependent on having enough information to construct the KG and documents to begin with. In some domains, this could be problematic. However, in the biomedical domain, and specifically in the field of the biomedical application scenarios described above, sufficient data is already available.
[0093] Many modifications and other embodiments of the invention set forth herein will come to mind to one skilled in the art to which the invention pertains having the benefit of the teachings presented in the foregoing description and the associated drawings. Therefore, it is to be understood that the invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
[0094] While subject matter of the present disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. Any statement made herein characterizing the invention is also to be considered illustrative or exemplary and not restrictive as the invention is defined by the claims. It will be understood that changes and modifications may be made, by those of ordinary skill in the art, within the scope of the following claims, which may include any combination of features from different embodiments described above.
[0095] The terms used in the claims should be construed to have the broadest reasonable interpretation consistent with the foregoing description. For example, the use of the article “a” or “the” in introducing an element should not be interpreted as being exclusive of a plurality of elements. Likewise, the recitation of “or” should be interpreted as being inclusive, such that the recitation of “A or B” is not exclusive of “A and B,” unless it is clear from the context or the foregoing description that only one of A and B is intended. Further, the recitation of “at least one of A, B and C” should be interpreted as one or more of a group of elements consisting of A, B and C, and should not be interpreted as requiring at least one of each of the listed elements A, B and C, regardless of whether A, B and C are related as categories or otherwise. Moreover, the recitation of “A, B and/or C” or “at least one of A, B or C” should be interpreted as including any singular entity from the listed elements, e.g., A, any subset from the listed elements, e.g., A and B, or the entire list of elements A, B and C.