LOCATING SUSPECT TRANSACTION PATTERNS IN FINANCIAL NETWORKS
Publication No. 20230214842 · Published 2023-07-06
Inventors
CPC classification: G06N5/01 (Physics); G06Q20/4016 (Physics); G06N3/042 (Physics)
International classification: G06Q20/40 (Physics)
Abstract
An approach for locating suspect patterns of transactions in a financial network may be provided. The approach may include generating a transaction graph for a financial network by processing transaction data defining transfers between accounts in that network. The approach may include modifying the transaction graph to include synthetic suspect transaction patterns at multiple locations in the graph and extracting subgraphs from the transaction graph. The approach may include training a graph neural network model to classify subgraphs containing a synthetic suspect transaction pattern as suspect. The approach may also include locating suspect transaction patterns in a new financial network by generating a new transaction graph for that network and classifying a subgraph of the new financial network as suspect.
Claims
1. A computer-implemented method for locating suspect patterns of transactions in a financial network, the method comprising: generating a first transaction graph for a first financial network; modifying the first transaction graph at a plurality of locations in the first transaction graph to include one or more synthetic suspect transaction patterns; extracting a first set of one or more subgraphs from the transaction graph, wherein each of the one or more subgraphs of the first set is comprised of a randomly-selected node and a group of nodes reachable from the randomly-selected node via corresponding edges in the transaction graph; training a graph neural network model to classify a subgraph as suspect with the first set of the one or more extracted subgraphs; providing a second financial network to the trained graph neural network model; and classifying one or more subgraphs of the second financial network as suspect, based on the trained graph neural network model.
2. The computer-implemented method of claim 1, wherein classifying one or more subgraphs in the second financial network further comprises: generating a second transaction graph for the second financial network; extracting a second set of one or more subgraphs from the second transaction graph; and supplying the second set of one or more subgraphs to the graph neural network model for classification.
3. The computer-implemented method of claim 1, further comprising: responsive to one or more of the subgraphs from the second set of subgraphs being classified as suspect, initiating an action in the second financial network, wherein the action is comprised of at least one of the following: sending an alert, indicating accounts corresponding to nodes in a subgraph, to a network controller in the second financial network; flagging accounts corresponding to nodes in a subgraph for monitoring in the second financial network; and freezing accounts corresponding to nodes in a subgraph.
4. The computer-implemented method of claim 1, further comprising: extracting a third set of subgraphs from the second financial network, wherein each node in the second transaction graph is included in at least one subgraph of the third set of subgraphs.
5. The computer-implemented method of claim 1, wherein extracting the first set of one or more subgraphs further comprises: selecting all nodes in the first transaction graph up to a boundary, wherein the boundary is a predetermined number of hops in the first transaction graph away from the randomly-selected node; and for each neighboring node outside the boundary of each node on the boundary, determining whether to move the boundary to a neighboring node outside of the boundary based on a set of vectors for the neighboring node and a set of vectors for a predefined set of nodes.
6. The computer-implemented method of claim 5, wherein determining whether to move the boundary further comprises: comparing a distribution of transaction values in the set of vectors for the neighboring node to the distribution of transaction values over the set of vectors for the predefined set of nodes.
7. The computer-implemented method of claim 6, wherein the predefined set of nodes is comprised of a set of nodes on the boundary.
8. The computer-implemented method of claim 6, wherein: each node of the first transaction graph has one of a plurality of account types, including a private-user account and a company account; and wherein the predefined set of nodes has a same account type as the neighboring node.
9. The computer-implemented method of claim 1, wherein modifying the first transaction graph further comprises: providing a synthetic suspect pattern graph for a transaction pattern; comparing the pattern of transfer amounts for nodes in the synthetic suspect pattern subgraph to patterns of transaction values for nodes in the first transaction graph; identifying one or more corresponding transaction patterns in the first transaction graph; and modifying the identified one or more corresponding transaction patterns in the first transaction graph to match the synthetic suspect pattern subgraph.
10. The computer-implemented method of claim 9, wherein providing a synthetic suspect pattern graph further comprises: training a generative graph neural network to generate synthetic suspect pattern graphs based on a seed set of suspect transaction pattern graphs; and generating one or more synthetic suspect transaction pattern graphs based on the trained generative graph neural network.
11. The computer-implemented method of claim 9, further comprising: extracting a plurality of synthetic suspect transaction pattern subgraphs from an initial set of synthetic suspect transaction patterns, wherein each extracted synthetic suspect transaction pattern subgraph comprises a randomly-selected subset of nodes in a synthetic suspect transaction pattern; and storing the plurality of synthetic suspect transaction pattern subgraphs as additional suspect transaction pattern subgraphs for synthetic suspect transaction patterns for inclusion in the first transaction graph.
12. A computer system for locating suspect patterns of transactions in a financial network, the system comprising: one or more computer processors; one or more computer readable storage devices; and computer program instructions to: generate a first transaction graph for a first financial network; modify the first transaction graph at a plurality of locations in the first transaction graph to include one or more synthetic suspect transaction patterns; extract a first set of one or more subgraphs from the transaction graph, wherein each of the one or more subgraphs of the first set is comprised of a randomly-selected node and a group of nodes reachable from the randomly-selected node via corresponding edges in the transaction graph; train a graph neural network model to classify a subgraph as suspect with the first set of extracted subgraphs; provide a second financial network to the trained graph neural network model; and classify one or more subgraphs of the second financial network as suspect, based on the trained graph neural network model.
13. The computer system of claim 12, further comprising instructions to: generate a second transaction graph for the second financial network; extract a second set of one or more subgraphs from the second transaction graph; and supply the second set of one or more subgraphs to the graph neural network model for classification.
14. The computer system of claim 12, further comprising instructions to: responsive to one or more of the subgraphs from the second set of subgraphs being classified as suspect, initiate an action in the second financial network, wherein the action is comprised of at least one of the following: sending an alert, indicating accounts corresponding to nodes in the subgraph, to a network controller in the second financial network; flagging accounts corresponding to nodes in the subgraph for monitoring in the second financial network; and freezing accounts corresponding to nodes in the subgraph.
15. The computer system of claim 12, further comprising instructions to: extract a third set of subgraphs from the second financial network, wherein each node in the second transaction graph is included in at least one subgraph of the third set of subgraphs.
16. The computer system of claim 12, wherein the instructions to extract the first set of one or more subgraphs further comprise instructions to: select all nodes in the first transaction graph up to a boundary, wherein the boundary is a predetermined number of hops in the first transaction graph away from the randomly-selected node; and for each neighboring node, outside said boundary, of each node on the boundary, determine whether to move the boundary to a neighboring node outside of the boundary, based on a set of vectors for the neighboring node and a set of vectors for a predefined set of nodes.
17. A computer program product for locating suspect patterns of transactions in a financial network, the computer program product comprising one or more computer readable storage devices and program instructions stored on the one or more computer readable storage devices to: generate a first transaction graph for a first financial network; modify the first transaction graph at a plurality of locations in the first transaction graph to include one or more synthetic suspect transaction patterns; extract a first set of one or more subgraphs from the transaction graph, wherein each of the one or more subgraphs of the first set is comprised of a randomly-selected node and a group of nodes reachable from the randomly-selected node via corresponding edges in the transaction graph; train a graph neural network model to classify a subgraph as suspect with the first set of extracted subgraphs; provide a second financial network to the trained graph neural network model; and classify one or more subgraphs of the second financial network as suspect, based on the trained graph neural network model.
18. The computer program product of claim 17, further comprising program instructions to: generate a second transaction graph for the second financial network; extract a second set of one or more subgraphs from the second transaction graph; and supply the second set of one or more subgraphs to the graph neural network model for classification.
19. The computer program product of claim 17, further comprising program instructions to: responsive to one or more of the subgraphs from the second set of subgraphs being classified as suspect, initiate an action in the second financial network, wherein the action is comprised of at least one of the following: sending an alert, indicating accounts corresponding to nodes in the subgraph, to a network controller in the second financial network; flagging accounts corresponding to nodes in the subgraph for monitoring in the second financial network; and freezing accounts corresponding to nodes in the subgraph.
20. The computer program product of claim 17, further comprising program instructions to: extract a third set of subgraphs from the second financial network, wherein each node in the second transaction graph is included in at least one subgraph of the third set of subgraphs.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
DETAILED DESCRIPTION
[0027] The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
[0028] The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
[0029] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
[0030] Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
[0031] Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
[0032] These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
[0033] The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0034] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
[0035] Embodiments to be described can be performed as computer-implemented methods for locating suspect transaction patterns in a financial network. Such methods may be implemented by a computing system comprising one or more general- or special-purpose computers, each of which may comprise one or more (real or virtual) machines, providing functionality for implementing operations described herein. Steps of methods embodying the invention may be implemented by program instructions, e.g., program modules, implemented by a processing apparatus of the system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. The computing system may be implemented in a distributed computing environment, such as a cloud computing environment, where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
[0037] Bus 4 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
[0038] Computer 1 typically includes a variety of computer readable media. Such media may be any available media that is accessible by computer 1 including volatile and non-volatile media, and removable and non-removable media. For example, system memory 3 can include computer readable media in the form of volatile memory, such as random access memory (RAM) 5 and/or cache memory 6. Computer 1 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 7 can be provided for reading from and writing to a non-removable, non-volatile magnetic medium (commonly called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can also be provided. In such instances, each can be connected to bus 4 by one or more data media interfaces.
[0039] Memory 3 may include at least one program product having one or more program modules that are configured to carry out functions of embodiments of the invention. By way of example, program/utility 8, having a set (at least one) of program modules 9, may be stored in memory 3, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data, or some combination thereof, may include an implementation of a networking environment. Program modules 9 generally carry out the functions and/or methodologies of embodiments of the invention as described herein.
[0040] Computer 1 may also communicate with: one or more external devices 10 such as a keyboard, a pointing device, a display 11, etc.; one or more devices that enable a user to interact with computer 1; and/or any devices (e.g., network card, modem, etc.) that enable computer 1 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 12. Also, computer 1 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 13. As depicted, network adapter 13 communicates with the other components of computer 1 via bus 4. Computer 1 may also communicate with additional processing apparatus 14, such as a GPU (graphics processing unit) or FPGA, for implementing embodiments of the invention. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer 1. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.
[0041] The
[0042] Logic modules 24 through 27 interface with memory 21 which stores various data structures used in operation of system 20. These data structures include a reference transaction graph (G.sub.R) 35 produced by graph generator 24 from the transaction data 29, data defining a set of synthetic suspect transaction patterns 36 which are imposed on the transaction graph, and data defining a set of subgraphs 37 (with associated labels explained below) which are extracted from the resulting graph. These subgraphs are used by model generator 27 for training GNN model 30. After the training phase, transaction data received from financial network 23 is stored at 38 in memory 21, and a new transaction graph (G.sub.N) 39 is generated for the network 23. New subgraphs 40 are extracted from this transaction graph, and these subgraphs are then classified by GNN model 30, yielding classification results 41.
[0043] In general, functionality of logic modules 24 through 27 may be implemented by software (e.g., program modules) or hardware or a combination thereof. Functionality described may be allocated differently between system modules in other embodiments, and functionality of one or more modules may be combined. The various components of system 20 may be provided in one or more computers of a computing system. For example, all modules may be provided in a computer 1 at which UI 33 is displayed to a user, or modules may be provided in one or more computers/servers to which user computers can connect via a network (which may comprise one or more component networks and/or internetworks, including the Internet). System 20 may be local or remote from the financial network 23 to be monitored, and may be integrated in the network in some embodiments. System memory 21 may be implemented by one or more memory/storage components associated with one or more computers of system 20.
[0044] The reference transaction data 29 can be compiled from transaction records for accounts in a reference financial network, where this reference network may comprise one or more component networks for one or more banks, consortia of banks, and/or other monetary processing systems (including electronic transaction systems such as the Bitcoin transaction network) in which transfers are made between user accounts. This transaction data can be compiled over a period of time such as a number of months (e.g., a financial quarter), a financial year, or longer periods of multiple years for which transaction records are available. Preferred embodiments use transaction data acquired over at least a year of operation of a large reference network comprising multiple component transaction networks. For each transaction, the reference transaction data comprises a sender account identifier, a receiver account identifier, a transfer amount and a transaction timestamp indicating date and time of the transaction. Additional data, such as an account type (e.g., private user account or company/business account) may also be provided and can be exploited by embodiments below. The reference transaction data 29 may be accessed dynamically from one or more remote databases, or may be precompiled for system operation and stored in system memory 21.
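By way of illustration only, the aggregation of raw transaction records into a transaction graph as described above may be sketched as follows. This is a minimal Python example, not the implementation of any embodiment; the record layout and account identifiers shown are assumptions chosen for illustration:

```python
from collections import defaultdict

def build_transaction_graph(records):
    """Aggregate raw transaction records into a directed transaction graph.

    Each record is (sender, receiver, amount, timestamp). Nodes are account
    identifiers; each directed edge carries the list of (amount, timestamp)
    transfers observed between a sender/receiver pair.
    """
    edges = defaultdict(list)   # (sender, receiver) -> [(amount, ts), ...]
    nodes = set()
    for sender, receiver, amount, ts in records:
        nodes.add(sender)
        nodes.add(receiver)
        edges[(sender, receiver)].append((amount, ts))
    return nodes, dict(edges)

# Hypothetical records for illustration (identifiers are not from any real network).
records = [
    ("acct_A", "acct_B", 250.0, "2022-01-03T10:15"),
    ("acct_A", "acct_B", 90.0,  "2022-01-07T16:40"),
    ("acct_B", "acct_C", 310.0, "2022-01-08T09:02"),
]
nodes, edges = build_transaction_graph(records)
```

In practice, the per-edge transfer lists would then be summarized over time intervals Δt into the transaction matrices described below.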
[0045]
[0046] As
[0047] When the reference transaction graph 35 has been generated and stored in step 45 of
[0048] The GNN model 30 is adapted to receive input subgraphs and to produce node embeddings for the subgraph via a message passing process. Message passing (MP) is a well-known technique in GNN architectures via which “messages” are passed between nodes of a graph and the “state” of each node is iteratively updated based on the messages that node receives. The state of a node is defined by a feature vector, and a message is defined as some function of the feature vectors of that node's neighbors in the graph. The node state is then updated as some function of its current state and the received message. Messages are passed iteratively, whereby all node states are successively updated with each MP step to obtain the final node embeddings. In the example of
[0049] While the GNN layer structure can be based on any convenient GNN architecture, e.g., graph attention networks (GAT), message passing neural networks (MPNN), temporal graph networks (TGN), preferred embodiments employ a GCN (Graph Convolutional Network) architecture using Graph Isomorphism Network (GIN) layers. The number of hidden layers 56, and hence message passing steps, is determined based on dimensions of subgraphs in the training set, e.g., as the average number of nodes in the training subgraphs. A variety of MP algorithms may also be employed here, such as a sum of the node feature vectors, or a sum in combination with the computation of the divergence or curl, e.g., the sum of the curl of the node feature vectors to be aggregated during the message passing step.
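As a non-limiting sketch of the message passing process described above, the following Python fragment performs a single MP step in which each node aggregates (sums) its neighbors' feature vectors and updates its state from its current state and the aggregated message. The linear-plus-ReLU update rule and the weight matrices are assumptions for illustration, not the specific GIN layers of the preferred embodiment:

```python
import numpy as np

def message_passing_step(states, neighbors, W_self, W_msg):
    """One message-passing step over a graph.

    states:    dict mapping node id -> feature vector (node state)
    neighbors: dict mapping node id -> list of neighbor node ids
    The message for a node is the sum of its neighbors' feature vectors;
    the state update is ReLU(W_self @ state + W_msg @ message).
    """
    new_states = {}
    for node, h in states.items():
        msg = sum((states[n] for n in neighbors.get(node, [])),
                  start=np.zeros_like(h))
        new_states[node] = np.maximum(0.0, W_self @ h + W_msg @ msg)  # ReLU
    return new_states
```

Iterating this step updates all node states simultaneously, so after k steps each node's embedding reflects its k-hop neighborhood.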
[0050] The node embeddings output by the GNN are aggregated (e.g., concatenated or otherwise combined) in an aggregator module 58 to provide the final embedding for each subgraph. The subgraph embedding is input to a classifier 59 which can be implemented in known manner, conveniently as a simple feedforward neural network. Classifier 59 also receives the label, described above, for an input subgraph. Model 30 can thus be trained to correctly classify input subgraphs via a well-known supervised learning process. In this process, the TR module 31 of model generator 27, supplies subgraphs iteratively to the model, and the model parameters (e.g., neural network weights in the message passing layers) are progressively updated based on backpropagation of errors between the model output and the subgraph label. For this training operation, TR module 31 typically splits the training dataset into training, validation and test sets, where the training set is used for initial model training, the validation set is used to fine-tune model parameters, and the test set is used to assess final model performance.
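The dataset split performed by TR module 31 can be sketched as follows; the 70/15/15 fractions and fixed shuffle seed are assumptions for illustration only:

```python
import random

def split_dataset(labeled_subgraphs, frac_train=0.7, frac_val=0.15, seed=0):
    """Shuffle labeled subgraphs and split them into training, validation
    and test sets in the given proportions (remainder goes to the test set)."""
    data = list(labeled_subgraphs)
    random.Random(seed).shuffle(data)
    n = len(data)
    n_train = int(n * frac_train)
    n_val = int(n * frac_val)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])
```

The training set drives the backpropagation updates described above, the validation set guides hyperparameter tuning, and the held-out test set estimates final model performance.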
[0051] When training of GNN model 30 is complete, system 20 can be applied to locate suspect transaction patterns in a new financial network 23. In step 49 of
[0052] A synthetic pattern 35 used in graph modification step 46 is conveniently defined by a “pattern graph” comprising nodes, representing respective accounts involved in the pattern, interconnected by edges representing transfers between these accounts.
[0053] In some embodiments, the entire set of synthetic patterns 35 might be predefined by AML experts. However, preferred embodiments below use techniques for augmenting a basic “seed set” of expert-defined patterns to generate additional synthetic patterns for inclusion in the graph. Graph modifier 25 may impose synthetic patterns on the transaction graph in various ways. As a simplistic example, a pattern involving m interconnected nodes might be imposed by selecting m similarly interconnected nodes in G.sub.R and adding the transfers in the synthetic pattern into the transaction matrices for these nodes. Preferred techniques for imposing the synthetic patterns on G.sub.R will be described in detail below. Each pattern 36 can be included at multiple locations in G.sub.R, and patterns can be overlaid via multiple iterations of the graph modification process to provide a complex scenario for subsequent processing. For each instance of each pattern imposed on the graph, graph modifier 25 records the pattern location (i.e., the set of nodes involved in the pattern), e.g., by annotating nodes of G.sub.R with a pattern identifier for later identification of synthetic pattern locations.
[0054] In subgraph extraction step 47 of
[0055] When extracting subgraphs 39 from the new transaction graph G.sub.N in step 51 of
[0056] In step 53 of
[0057] It will be seen that the above system focuses on aggregate characteristics of patterns of transfers, based on temporally summarized transaction activity, in a transaction graph for a real financial network with embedded synthetic patterns of fraud. By training GNN model 30 on subgraphs extracted from this data structure, detection of suspect activity is based on qualitative, rather than quantitative, properties of transaction patterns. The resulting system can efficiently identify suspect transaction patterns in any financial network 23, unconstrained by predefined transaction limits or other hardwired AML heuristics. This not only offers improved detection of fraudulent activity, but also inhibits attempts by fraudsters to circumvent known AML processes.
[0058] Steps of preferred methods embodying the invention will now be described in more detail.
[0059] In step 62 of
[0060] In step 65, the graph modifier then modifies the transaction graph G.sub.R to include all synthetic patterns in the augmented pattern set. In step 66 of this embodiment, the graph modifier further modifies the transaction graph to include some additional synthetic patterns via heuristics-based rules defining these patterns. For example, circular transaction patterns involving a defined number of nodes can be imposed by selecting a corresponding number of nodes in the graph and applying a defined operation to values in the transaction matrices for these nodes, e.g., to include circular transfers of equal or correlated transfer amounts. As a further example, additional transfer amounts may be included in transaction matrices for individual nodes to represent high-value, and hence potentially suspicious, transactions. Such heuristics can be applied to the graph through a pipeline in which the rules are executed. The graph modifier of this embodiment thus serves as a synthetic ground-truth generator, augmenting the initial seed set of ground-truth patterns, as well as accommodating a subset of patterns based on conventional heuristics-based rules.
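A heuristics-based rule of the kind described, imposing a circular chain of correlated transfers on a dense transaction matrix, might be sketched as follows. The matrix representation, jitter factor, and amounts are illustrative assumptions, not the claimed operations:

```python
import numpy as np

def impose_circular_pattern(tx, cycle_nodes, amount, jitter=0.05, seed=0):
    """Add a circular chain of transfers of approximately equal amounts
    through the given nodes: n0 -> n1 -> ... -> n0.

    tx is a dense node-by-node matrix of aggregated transfer amounts.
    Returns a modified copy of the matrix plus the set of involved nodes,
    which would be recorded as a synthetic pattern location.
    """
    rng = np.random.default_rng(seed)
    out = tx.copy()
    k = len(cycle_nodes)
    for i in range(k):
        src, dst = cycle_nodes[i], cycle_nodes[(i + 1) % k]
        # Correlated (nearly equal) amounts around each hop of the cycle.
        out[src, dst] += amount * (1.0 + rng.uniform(-jitter, jitter))
    return out, set(cycle_nodes)
```

Recording the returned node set mirrors the annotation of pattern locations described earlier, allowing extracted subgraphs to be labeled as suspect when they contain the pattern.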
[0061]
[0062]
[0063] While an example of the function F is given above, this function may alternatively (or additionally) determine whether the neighbor node's activity is notably different from that of a “typical node” in the transaction graph. As an example, in some embodiments the transaction graph may include, for each node, an account-type attribute indicating one of a plurality of account types, e.g., private-user account or company account, for the account represented by that node. Where multiple such account types are defined, these can be encoded in node attributes such that similar account types have encodings with low distance. In step 83 of
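One possible concrete form of the ideas above, offered purely as a sketch: account types encoded as vectors so that similar types have low-distance encodings, plus a simple deviation test standing in for the function F. The specific encodings, names, and threshold are hypothetical, not taken from the embodiment.

```python
import math

# Hypothetical encodings: similar account types sit close together,
# e.g. two kinds of company account are nearer to each other than
# either is to a private-user account. Values are illustrative only.
ACCOUNT_TYPE_ENCODING = {
    "private":       (0.0, 0.0),
    "small_company": (1.0, 0.0),
    "large_company": (1.2, 0.1),
}

def encoding_distance(type_a, type_b):
    """Distance between two account-type encodings."""
    return math.dist(ACCOUNT_TYPE_ENCODING[type_a],
                     ACCOUNT_TYPE_ENCODING[type_b])

def is_atypical(node_activity, typical_activity, threshold=2.0):
    """One possible form of the function F: flag a neighbor whose
    aggregate activity deviates notably from that of a 'typical node'."""
    return abs(node_activity - typical_activity) > threshold * max(typical_activity, 1e-9)
```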
[0064] The subgraph extractor thus iteratively decides whether to move the boundary of a given subgraph until the stop condition is satisfied. By selectively moving boundaries in this way, subgraphs can be defined dynamically so as to include nodes likely to be involved in suspect transactions.
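The iterative boundary-moving procedure might be sketched as below, under assumptions not stated in the source: a plain adjacency mapping for the graph, a size cap as an additional stop condition, and a callback `include_fn` standing in for the decision function applied to boundary neighbors.

```python
def expand_subgraph(seed, neighbors, include_fn, max_size=50):
    """Sketch of dynamic subgraph extraction: starting from a seed node,
    repeatedly admit boundary neighbors that the decision function
    accepts, until no boundary node qualifies (the stop condition) or a
    size cap is reached. `neighbors` maps a node to its adjacent nodes;
    `include_fn(node)` is a hypothetical stand-in for the function F."""
    subgraph = {seed}
    frontier = [seed]
    while frontier and len(subgraph) < max_size:
        node = frontier.pop()
        for nbr in neighbors.get(node, ()):
            if nbr not in subgraph and include_fn(nbr):
                subgraph.add(nbr)       # move the boundary outward
                frontier.append(nbr)
    return subgraph
```

Because the boundary only advances through nodes the decision function accepts, the resulting subgraph is biased toward regions of the graph likely to be involved in suspect transactions.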
[0065]
[0066]
[0067] It will be seen that the above embodiments offer highly efficient systems for detecting indicators of fraud over aggregates of transactions in financial networks. However, numerous changes and modifications can be made to the exemplary embodiments described. For example, transaction values in matrices M.sub.i may be calculated as various functions of the accumulated transfer amounts in the time interval Δt. Transaction graphs may also include additional metadata associated with nodes and/or edges of the graph. For instance, G.sub.R may include, for each edge, at least one edge property indicating a transfer amount and transfer time for a transfer represented by that edge. Such edge properties can then be assimilated in the message passing process for updating node states in model 30, thus encoding this information in subgraph embeddings. Additionally or alternatively, G.sub.R may include, for each edge, at least one edge property derived by Fourier analysis of the temporal distribution of transfer amounts between nodes interconnected by that edge. For example, the dominant frequency components of the transform may be included as edge properties. This accommodates information in the frequency domain, as well as the temporal domain, in the subgraph embeddings to assist identification of periodicity in transaction patterns.
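The Fourier-derived edge property could be computed along the following lines. This is an illustrative sketch only; the per-interval binning of transfer amounts and the choice of feature (indices of the strongest non-DC components) are assumptions, not details from the embodiments.

```python
import cmath

def dominant_frequency_feature(amount_series, top_k=1):
    """Hypothetical edge-property sketch: given the temporal series of
    transfer amounts between two accounts (one value per time interval),
    compute a discrete Fourier transform and return the indices of the
    strongest non-DC components as a periodicity feature for the edge."""
    n = len(amount_series)
    mags = []
    for k in range(1, n // 2 + 1):      # skip the DC (k = 0) term
        coeff = sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(amount_series))
        mags.append((abs(coeff), k))
    mags.sort(reverse=True)             # strongest components first
    return [k for _, k in mags[:top_k]]
```

A strongly periodic transfer pattern, e.g. a fixed amount moved every fourth interval, would then yield a pronounced peak at the corresponding frequency index, making the periodicity visible to the model even when individual amounts look unremarkable.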
[0068] In general, where features are described herein with reference to a method embodying the invention, corresponding features may be provided in a system/computer program product embodying the invention, and vice versa.
[0069] The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.