Immutable Storage as a Machine Learning Archive Mechanism
20220350524 · 2022-11-03
Assignee
Inventors
CPC classification
G06F16/27 — PHYSICS
G06F3/0604 — PHYSICS
H04L2209/56 — ELECTRICITY
G06F3/0655 — PHYSICS
G06F3/0679 — PHYSICS
International classification
Abstract
An apparatus and method for providing an immutable audit trail for machine learning applications is described herein. The audit trail is preserved by recording the machine learning models and data in a data structure in immutable storage, such as a WORM device, or in a blockchain. The immutable audit trail is important for providing bank auditors with the reasons behind lending or account-opening decisions, for example. A graphical user interface is described to allow the archive of machine learning models to be viewed.
Claims
1. An apparatus for archiving machine learning models, the apparatus comprising: a special purpose multi-core server; a write-once-read-many hardware storage facility electrically connected to the special purpose multi-core server; an application that executes code on the special purpose multi-core server; a machine learning engine that receives data from the application and returns a result to the application; and a machine learning model integrated with the machine learning engine, wherein the machine learning model updates periodically; wherein the machine learning model is stored in the write-once-read-many hardware storage facility by the special purpose multi-core server when the machine learning model is updated.
2. The apparatus of claim 1 wherein the write-once-read-many hardware storage facility is a compact disk device.
3. The apparatus of claim 1 wherein the write-once-read-many hardware storage facility is a digital versatile disc recordable device.
4. The apparatus of claim 1 wherein the write-once-read-many hardware storage facility is a read only flash memory device.
5. The apparatus of claim 1 wherein the write-once-read-many hardware storage facility is a read only memory chip.
6. The apparatus of claim 1 further comprising customer data that is stored in the write-once-read-many hardware storage facility by the special purpose multi-core server.
7. The apparatus of claim 6 wherein the customer data relates to banking.
8. The apparatus of claim 6 wherein the customer data is stored each time the machine learning engine is called by the application.
9. The apparatus of claim 6 wherein the result is stored in the write-once-read-many hardware storage facility by the special purpose multi-core server.
10. The apparatus of claim 6 wherein the machine learning model is updated when the customer data is used to train the machine learning model.
11. A method for archiving machine learning models, the method comprising: receiving data from an application at a machine learning engine running on a special purpose multi-core server; calling a machine learning model by the machine learning engine; executing the machine learning model using the data to determine a result; returning the result to the machine learning engine and to the application; updating the machine learning model by the special purpose multi-core server; and storing the machine learning model in a write-once-read-many hardware storage facility when the machine learning model is updated.
12. The method of claim 11 further comprising storing customer data in the write-once-read-many hardware storage facility by the special purpose multi-core server.
13. The method of claim 12 wherein the customer data relates to banking.
14. The method of claim 12 wherein the customer data is stored each time the machine learning engine is called by the application.
15. The method of claim 12 wherein the result is stored in the write-once-read-many hardware storage facility by the special purpose multi-core server.
16. The method of claim 12 wherein the machine learning model is updated when the customer data is used to train the machine learning model.
17. The method of claim 11 wherein the machine learning model is updated periodically.
18. The method of claim 11 wherein the write-once-read-many hardware storage facility is a compact disk device.
19. The method of claim 11 wherein the write-once-read-many hardware storage facility is a digital versatile disc recordable device.
20. The method of claim 11 wherein the write-once-read-many hardware storage facility is a read only memory chip.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0020] There is a strong need in the machine learning industry to provide an audit trail for machine learning models, so that bank regulators, legal teams, and internal auditors can verify that the decisions made by the machine learning software were made without bias.
[0022] A sample model is seen in
[0023] For a banking model 103, the input parameters may include such factors as total assets 504, total debt 505, total income 506, total expenses 507, zip code 508, and other criteria, and the model may decide whether to allow or deny a loan. In some uses, the model makes a recommendation and a human reviews it (called supervised machine learning), either approving or disapproving the model's result. If the human overrules the machine, the results are fed back into the training set so that the machine/model learns from the mistake.
[0024] Every time the model 103 is updated, the model is stored in the immutable storage 106 so that there is an auditable trail of why decisions were made by the machine. The immutable storage 106 could be a form of write-once-read-many hardware device, such as a compact disc recordable (CD-R) device or a write-once Digital Versatile Disc Recordable (DVD-R) device. Read only memory chips could also be used in some embodiments. A version of the Secure Digital flash memory card exists in which the internal microprocessor does not allow rewrites of any block of the memory. Multiple vendors provide magnetic WORM storage technologies, including (but not limited to) NetApp, EMC Centera, KOM Networks, and others. Prevention of rewrite is enforced at the physical disk level and cannot be modified or overridden by the attached computer.
[0025] Blockchain technology also provides an immutable ability to store the reasoning concerning a decision so that a bank examiner or auditor can double check the decision. Blockchain technology is unmodifiable yet viewable, and can be added to without impacting the previously stored data. This allows new archive information to be added to the chain without impacting the integrity of the previously stored information.
[0026] A simple blockchain is nothing more than a list of transactions that is chained, or linked, together using novel data structures and some basic cryptographic principles. Anyone can make a blockchain and create blocks. To create a secure, immutable blockchain, the blockchain must be distributed, and creating blocks must require more effort and cost than the value of what is being protected. The rules need to provide mathematical certainty, from how transactions are "signed" through to how much "proof of work" must accompany a block.
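The chaining principle described above can be sketched in a few lines. This is a minimal illustration only, without the distribution or proof-of-work elements the paragraph describes; the function names and record layout are hypothetical.

```python
import hashlib
import json

def make_block(payload: dict, prev_hash: str) -> dict:
    """Chain a record to its predecessor by hashing the payload
    together with the previous block's hash (hypothetical layout)."""
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain: list) -> bool:
    """Recompute every hash; altering an earlier block breaks the
    links of all subsequent blocks."""
    prev = "0" * 64
    for block in chain:
        body = {"payload": block["payload"], "prev_hash": block["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True

# Build a two-block chain: each archived model version links to the last.
genesis = make_block({"model_version": 1}, "0" * 64)
second = make_block({"model_version": 2}, genesis["hash"])
chain = [genesis, second]
```

Because each hash covers the previous block's hash, rewriting any stored record would require re-creating every later block, which is the property the following paragraph relies on.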
[0027] There are many parts to the secure blockchain's rulebook, and every part is essential to the scheme's overall security; no single element (including the blockchain itself) is enough to secure the transactions. The rules ensure that the computing power required to tear out a recent block, alter a transaction, and then re-create all subsequent blocks is more than any attacker, and even most governments, could amass.
[0028] Any of these technologies could be used to provide the immutable storage 106. In many embodiments, the training module 101 is run periodically, so the model 103 is updated periodically. This means that a reasonably small number of model 103 versions need to be stored. However, in some embodiments, particularly in supervised machine learning models, the model is updated in real time (or close to real time) as a user corrects the machine's decisions. In these cases the model 103 would need to be stored with high frequency in the immutable storage 106. Special hardware may need to be implemented to store the volume of models 103, and with performance that can handle the rapidity with which the model 103 changes.
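The store-on-update behavior described above can be sketched as follows. The `ImmutableArchive` class is a hypothetical stand-in for the WORM device or blockchain serving as immutable storage 106: it permits appends and reads but offers no delete or rewrite operation.

```python
import time

class ImmutableArchive:
    """Hypothetical append-only store standing in for the immutable
    storage 106: records can be appended and read, never altered."""
    def __init__(self):
        self._records = []

    def append(self, record: dict) -> int:
        # Store a defensive copy; return the record's address.
        self._records.append(dict(record))
        return len(self._records) - 1

    def read(self, index: int) -> dict:
        # Return a copy so callers cannot mutate the stored record.
        return dict(self._records[index])

def archive_model(archive: ImmutableArchive, model_params: dict, version: int) -> int:
    """Snapshot the model into immutable storage each time it is updated."""
    record = {
        "version": version,
        "timestamp": time.time(),
        "params": model_params,
    }
    return archive.append(record)
```

In a real embodiment the append would be a write to the WORM device or a new block on the chain; the interface shape is the point of the sketch.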
[0029] In some embodiments, the data (
[0030] A model audit graphical user interface 107 could also be provided to review the history of the model changes in the immutable storage 106. This audit GUI 107 could allow search capabilities as well as an ability to list when and how often the model changed. Graphical capabilities to visually present the model are features of some embodiments (e.g., showing the curve on the x-, y-, and z-axes, similar to
[0031] In addition, the audit GUI 107 could be used to reset the model 103 back to a certain time frame, to allow selective running of the model with a certain set of data. Another feature could set the model 103 to a certain time or to certain values, allowing the machine learning engine 104 to resume using the model 103 selected in the GUI 107. This feature is useful if the machine has been incorrectly trained through improper supervised learning, allowing the impact to be reversed by rewinding the model.
[0032] The model audit GUI 107 is not permitted to delete or alter the data in the immutable storage 106, although the GUI 107 may be allowed to add to the immutable storage 106 to annotate the model records. This allows comments on why a historical or a new model was forced by the GUI 107 into the machine learning model 103. The GUI 107 could also have facilities for printing or exporting one or more models to another program.
[0033] In some embodiments, the data input 504, 505, 506, 507, 508 to the model 103 is also stored in the immutable storage 106. In this embodiment, the GUI 107 could allow the running of a data set or a range of data sets to be run through the current model 103 and the results returned to the GUI 107.
[0034] Looking to
[0035] The next data field in the record 300 is the model version 302. This is an optional field that specifies the version of the model 103 and could simply be an incremented counter.
[0036] A timestamp 303 is the next field in the record 300, and records both a time and a date value. The timestamp 303 could be a two element array of the timestamp when the model began use and the timestamp when the model ended use. In other embodiments, either the beginning or ending timestamp is stored, and the other time is implied by the previous or next record.
[0037] A machine learning algorithm 304 is next stored in the record 300 in this example embodiment. The algorithm 304 could be a string describing the type of curve used or a constant specifying the type of curve or algorithm used. In the example used in
[0038] The next field specifies the order 305 of a polynomial or other formula. This field is not used in all algorithms. This field is in the
[0039] The next four or more fields contain constants 306, 307, 308, 309 for the algorithm. In the
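The fields of record 300 described in the preceding paragraphs can be sketched as a data structure, together with evaluation of a stored polynomial model from its order and constants. This is an illustrative sketch only; the class, field names, and the assumption that the constants are polynomial coefficients in ascending order are not specified by the description above.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class ModelRecord:
    """Sketch of model record 300: link to the next record, version,
    begin/end timestamps, algorithm, order, and constants."""
    next_record: Optional[int]       # link to the previous model version
    model_version: int               # field 302: incremented counter
    timestamps: Tuple[float, float]  # field 303: begin and end of use
    algorithm: str                   # field 304: e.g. "polynomial"
    order: int                       # field 305: order of the polynomial
    constants: List[float]           # fields 306-309: algorithm constants

def evaluate(record: ModelRecord, x: float) -> float:
    """Evaluate a stored polynomial model: c0 + c1*x + c2*x**2 + ...
    (assumes constants are coefficients in ascending order)."""
    assert record.algorithm == "polynomial"
    coeffs = record.constants[: record.order + 1]
    return sum(c * x ** i for i, c in enumerate(coeffs))
```

Freezing the dataclass mirrors the immutability of the underlying storage: once a version of the model is archived, its record cannot be altered in place.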
[0040] Because of the complexities of machine learning algorithms, special purpose computing may be needed to build and execute the machine learning model described herein.
[0041] The server 403 is a high performance computing machine electrically connected to the network 402 and to the storage facilities 404. Furthermore, the server 403 is electrically connected to the immutable storage 405. Immutable storage facilities 405 are not common to most servers 403, but are used in the inventions described herein. In some embodiments, the immutable storage 405 is located in the cloud and indirectly connected to the server 403 through a network.
[0042] In order to preserve a complete archive of the machine learning transactions, some embodiments will also store all data that is run through the model 103 in the immutable storage 106. This data could look similar to that illustrated in
[0043] The first data field in the record 500 is a link to the next record 501 in the linked list of records 500. In most embodiments, the next record in the linked list contains the previously stored record (newest to oldest). In other embodiments, the record sizes could be fixed, and the next record 501 would be unnecessary. The next record 501 could run the linked list from the oldest to the newest in other embodiments without detracting from the invention, although buffering techniques would need to be employed to hold the newest record 500 until the next record is to be saved, so that the next record link 501 is known before writing. In other embodiments, the location is allocated before use, so that it is known when the previous record is stored. In some embodiments, this link 501 is a two element array, with one link pointing to the next record and the other link pointing to the previous record. In still another embodiment, a separate table of links to the records is stored apart from the data.
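The newest-to-oldest traversal described above can be sketched as follows, assuming (hypothetically) that the archive maps a record's storage address to the record, and that each record carries a `next_record` link that is `None` for the oldest record.

```python
def iterate_versions(archive: dict, head: int):
    """Walk the linked list of archived records newest-to-oldest by
    following each record's next-record link (field 501)."""
    addr = head
    while addr is not None:
        record = archive[addr]
        yield record
        addr = record["next_record"]
```

An audit GUI listing "when and how often the model changed" would be a straightforward consumer of this traversal.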
[0044] The next data field in the record 500 is the model version 502. This field specifies the version of the model 103 that processed the data 500. In some embodiments, rather than a version number, a link to the immutable storage 106 record 300 for the model is stored in field 502.
[0045] A timestamp 303 is the next field in the record 500. The timestamp 303 represents the date and time when the record is processed by the model 103.
[0046] After the timestamp field 303, the input parameters for the model are stored: customer assets 504, customer debt 505, customer income 506, customer expenses 507, and customer zip code 508. Any of these customer input fields could be deleted or additional fields added without detracting from the inventions herein. These fields, in most implementations, would be the same fields in the training dataset 102, with the addition of the correct result 510.
[0047] The next field that needs to be stored in the data record 500 is the machine learning result 509. This preserves the outcome of the model based on the above customer data set 504, 505, 506, 507, 508. In most embodiments, this value is a Boolean.
[0048] The final field is the human result 510. This is the result that a trainer or supervisor of the model 103 gives to the given user dataset 504, 505, 506, 507, 508. If this value does not agree with the machine learning result 509, then the machine model may need to be re-trained using the data. In some embodiments, a comment may be attached to the record 500 to memorialize the trainer's thinking in overriding the model 103. In some machine learning embodiments, the result 510 is not based on a human trainer but on another machine or a real-world result. For instance, if a loan approved by the machine learning model defaults, then the result 510 may change to a negative value, and the data is then used to retrain the model.
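The data record 500 described in paragraphs [0043] through [0048] can be sketched as a data structure, along with the disagreement check that flags a record for retraining. The class and field names are illustrative, not taken from the specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DataRecord:
    """Sketch of data record 500 with the fields described above."""
    next_record: Optional[int]    # link 501 to the previously stored record
    model_version: int            # field 502: model version (or link to record 300)
    timestamp: float              # when the record was processed
    assets: float                 # field 504: customer assets
    debt: float                   # field 505: customer debt
    income: float                 # field 506: customer income
    expenses: float               # field 507: customer expenses
    zip_code: str                 # field 508: customer zip code
    machine_result: bool          # field 509: the model's Boolean outcome
    human_result: Optional[bool]  # field 510: None until reviewed

def needs_retraining(record: DataRecord) -> bool:
    """A disagreement between the supervisor's result 510 and the
    machine result 509 flags the record for the next training set."""
    return (record.human_result is not None
            and record.human_result != record.machine_result)
```

A loan that later defaults would be recorded by writing a new record with a negative result 510 (the original record, being in immutable storage, is never altered), and `needs_retraining` would then select it for the training set.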
[0049] The foregoing devices and operations, including their implementation, will be familiar to, and understood by, those having ordinary skill in the art.
[0050] The above description of the embodiments, alternative embodiments, and specific examples, are given by way of illustration and should not be viewed as limiting. Further, many changes and modifications within the scope of the present embodiments may be made without departing from the spirit thereof, and the present invention includes such changes and modifications.