SYSTEMS AND METHODS FOR MACHINE LEARNING OPTIMIZATION
20220327382 · 2022-10-13
Inventors
CPC Classification
G06F11/0712 (PHYSICS)
G06N5/01 (PHYSICS)
G06F18/2148 (PHYSICS)
International Classification
Abstract
A computing system optimises a machine learning process. At the application level, the computing system comprises: a processing master pod maintaining a shared work queue comprising machine learning model training operations, each model training operation comprising an associated set of hyperparameter configurations to be evaluated during the course of the training operation, wherein each training operation is executed for a pre-defined number of iterations; a shared repository storing records, each record corresponding to one of the model training operations in the shared work queue; and processing worker pods, each worker pod: accessing a model training operation from the shared work queue; retrieving the corresponding record for the accessed model training operation; executing the pre-defined number of iterations for the accessed model training operation; and, for each executed iteration, outputting evaluation result data associated with the corresponding iteration to the shared repository for storage in the corresponding record.
Claims
1. A computing system for optimising a machine learning process, the computing system being implemented using a cluster computing infrastructure comprising a plurality of computing nodes, the computing system comprising at an application level: a processing master pod arranged to manage the optimisation, the processing master pod being configured to maintain a shared work queue comprising a plurality of machine learning model training operations, each model training operation comprising an associated set of hyperparameter configurations to be evaluated during the training operation, wherein each training operation is configured to be executed for a pre-defined number of iterations; a shared repository configured to store a plurality of records, each record corresponding to one of the model training operations in the shared work queue; and a plurality of processing worker pods, each worker pod being in operative communication with the shared work queue and the shared repository, and being configured to: access, from the shared work queue, a model training operation; retrieve, from the shared repository, the corresponding record for the accessed model training operation; execute the pre-defined number of iterations for the accessed model training operation; and for each executed iteration, output evaluation result data associated with the corresponding iteration to the shared repository for storage in the corresponding record.
2. The computing system of claim 1, wherein each model training operation has an associated completion time period within which execution of each of the iterations is to be completed.
3. The computing system of claim 2, wherein upon expiration of the completion time period, if the execution of the corresponding iteration is incomplete: the iteration is deemed to not have been successful; and the model training operation is configured to be returned to the shared work queue for access and execution by a different one of the plurality of processing worker pods.
4. The computing system of claim 2, wherein each worker pod is configured to, after executing each iteration of the model training operation, reset the completion time period in relation to a subsequent iteration of the model training operation.
5. The computing system of claim 1, wherein each worker pod is further configured to, upon retrieving the corresponding record for the model training operation: determine whether the accessed model training operation has previously been executed by a different processing worker pod, and if so, determine a last successful iteration of the model training operation; and implement the executing and outputting steps in respect of each of the remaining iterations that are not deemed successful.
6. The computing system of claim 5, wherein each worker pod is configured to, upon determining that the accessed model training operation has previously been executed by a different processing worker pod, access the corresponding record in the shared repository and delete any evaluation result data stored in association with the record in respect of all iterations subsequent to the last successful iteration.
7. The computing system of claim 1, wherein the set of hyperparameter configurations for each model training operation comprises one or more of the following: (a) a combination of hyperparameter input values; (b) a hyperparameter search space; (c) an objective metric to be achieved as a result of the model training operation; and (d) a search algorithm to be used.
8. The computing system of claim 7, wherein where the set of hyperparameter configurations comprises a combination of hyperparameter input values, these hyperparameter values are randomly generated.
9. The computing system of claim 7, wherein where the set of hyperparameter configurations comprises a search algorithm to be used, this search algorithm corresponds to a random search function or a grid search function.
10. A computer-implemented method for optimising a machine learning process comprising: creating, by a processing master pod, a shared work queue comprising a plurality of machine learning model training operations, each model training operation comprising an associated set of hyperparameter configurations to be evaluated during the training operation, wherein each training operation is configured to be executed for a pre-defined number of iterations; maintaining, by a shared repository, a plurality of stored records, each record corresponding to one of the model training operations in the shared work queue; and for each of a plurality of processing worker pods in operative communication with the shared work queue and the shared repository: accessing, from the shared work queue, a model training operation; retrieving, from the shared repository, the corresponding record for the accessed model training operation; executing the pre-defined number of iterations for the accessed model training operation; and for each executed iteration, outputting evaluation result data associated with the corresponding iteration to the shared repository for storage in the corresponding record.
11. The method of claim 10, further comprising, upon retrieving, by the processing worker pod and from the shared repository, the corresponding record for the model training operation: determining, by the processing worker pod, whether the accessed model training operation has previously been executed by a different processing worker pod, and if so, determining a last successful iteration of the model training operation; and implementing, by the processing worker pod, the executing and outputting steps in respect of each of the remaining iterations that are not deemed successful.
12. The method of claim 11, wherein, upon determining that the accessed model training operation has previously been executed by a different processing worker pod, the method further comprises: accessing, by a currently implementing worker pod, the corresponding record in the shared repository; and deleting, by the currently implementing worker pod, any evaluation result data stored in association with the record in respect of all iterations subsequent to the last successful iteration.
13. The method of claim 10, wherein each model training operation has an associated completion time period within which execution of each of the iterations is to be completed, and preferably wherein upon expiration of the completion time period, if the execution of the corresponding iteration is incomplete: the iteration is deemed to not have been successful; and the model training operation is configured to be returned to the shared work queue for access and execution by a different one of the plurality of processing worker pods.
14. The method of claim 13, further comprising: resetting, by the worker pod and after executing each iteration of the model training operation, the completion time period in relation to a subsequent iteration of the model training operation.
15. The method of claim 10, wherein the machine learning model is a neural network.
16. A computer storage medium having computer-executable instructions that, upon execution by a processor, cause the processor to at least: create, by a processing master pod, a shared work queue comprising a plurality of machine learning model training operations, each model training operation comprising an associated set of hyperparameter configurations to be evaluated during the training operation, wherein each training operation is configured to be executed for a pre-defined number of iterations; maintain, by a shared repository, a plurality of stored records, each record corresponding to one of the model training operations in the shared work queue; and for each of a plurality of processing worker pods in operative communication with the shared work queue and the shared repository: access, from the shared work queue, a model training operation; retrieve, from the shared repository, the corresponding record for the accessed model training operation; execute the pre-defined number of iterations for the accessed model training operation; and for each executed iteration, output evaluation result data associated with the corresponding iteration to the shared repository for storage in the corresponding record.
17. The computer storage medium of claim 16, further comprising, upon retrieving, by the processing worker pod and from the shared repository, the corresponding record for the model training operation: determining, by the processing worker pod, whether the accessed model training operation has previously been executed by a different processing worker pod, and if so, determining a last successful iteration of the model training operation; and implementing, by the processing worker pod, the executing and outputting steps in respect of each of the remaining iterations that are not deemed successful.
18. The computer storage medium of claim 17, wherein, upon determining that the accessed model training operation has previously been executed by a different processing worker pod, the computer-executable instructions further cause the processor to perform: accessing, by a currently implementing worker pod, the corresponding record in the shared repository; and deleting, by the currently implementing worker pod, any evaluation result data stored in association with the record in respect of all iterations subsequent to the last successful iteration.
19. The computer storage medium of claim 16, wherein each model training operation has an associated completion time period within which execution of each of the iterations is to be completed, and preferably wherein upon expiration of the completion time period, if the execution of the corresponding iteration is incomplete: the iteration is deemed to not have been successful; and the model training operation is configured to be returned to the shared work queue for access and execution by a different one of the plurality of processing worker pods.
20. The computer storage medium of claim 19, further comprising: resetting, by the worker pod and after executing each iteration of the model training operation, the completion time period in relation to a subsequent iteration of the model training operation.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] One or more embodiments or aspects will now be described, by way of example only, with reference to the accompanying drawings, in which:
[0022]
[0023]
[0024]
[0025] Where the figures laid out herein illustrate embodiments of the present disclosure, these should not be construed as limiting the scope of the disclosure. Where appropriate, like reference numerals will be used in different figures to refer to the same structural features of the illustrated embodiments.
DETAILED DESCRIPTION
[0026] A system and method for automated hyperparameter tuning according to the present disclosure, implemented using cluster computing resources, will now be described. Initially, however, in order to provide context and background for the implementation of this system and method, a general description is given of cluster computing architecture and of how this architecture may function in relation to machine learning.
[0027] In general, a cluster computing architecture comprises a plurality of processing nodes or computing machines that together make up the cluster. Specifically, each cluster comprises a ‘master’ node, and one or more ‘worker’ or ‘minion’ nodes. The master node corresponds to the main controlling unit of the cluster, and is configured to manage the workload distribution within the cluster as well as communication across the worker nodes in the system. The worker nodes host and provide the computing resources that enable various applications to be run simultaneously (i.e., in parallel) using the cluster.
[0028] More specifically, each worker node comprises or corresponds to one or more ‘pods’ that are deployed onto a given computing node entity. Each pod defines or provides a certain disk volume that is exposed to or used by one or more applications for use in executing their desired functions, operations or computing ‘jobs’. In addition, each pod may also comprise an associated storage volume that can be used to provide a shared disk space for applications executing within that node. In even greater detail, each pod comprises one or more ‘containers’ that are co-located on the same computing entity and which can share the computing resources (disk volume and storage volume) provided by their host pod. Considered another way, a container constitutes the basic unit of the cluster computing system; one or more containers, hosted on a given computing node entity, are used to execute functions of the applications running on the cluster as desired.
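Purely for illustration, the node/pod/container hierarchy described above might be modelled as in the following sketch. All class and field names here are assumptions made for the example and are not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Container:
    name: str
    image: str             # application image executed inside the container

@dataclass
class Pod:
    name: str
    containers: List[Container] = field(default_factory=list)
    volume_gb: int = 10    # shared disk/storage volume exposed to the pod's containers

@dataclass
class WorkerNode:
    name: str
    cpu_cores: int
    memory_gb: int
    pods: List[Pod] = field(default_factory=list)

@dataclass
class Cluster:
    master: str                                  # the controlling master node
    workers: List[WorkerNode] = field(default_factory=list)
```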
[0029] The above-described cluster computing architecture is thereby able to partition and manage the resources of a computing cluster so as to maximise the parallelisation of the functions that are desired to be executed by one or more applications, and to maximise the efficiency with which an overall job can be completed by the cluster.
[0030] Now that the underlying cluster computing architecture has been described, more detail will be provided in relation to implementing machine learning workflows using this architecture, and specifically in relation to automated hyperparameter tuning experiments. Before doing so, however, it is noted that these experiments will be particularly beneficial in relation to machine learning workflows which utilise neural networks, and especially those involving deep neural networks (DNNs). Such DNNs comprise one or more ‘hidden’ layers between their input and output layers, and effectively constitute a ‘black box’ with a complex internal structure and relationships (which do not necessarily map onto any real-world features); tuning of this internal structure optimises the predictive power of the DNN’s outputs based on the provided inputs. DNNs are therefore particularly useful for modelling complex non-linear relationships, and hence find application across a variety of technical fields, for example image processing and recognition; bioinformatics; drug discovery; financial fraud detection; and medical image analysis. However, due to their complex ‘black box’-like internal structures (the added ‘hidden’ layers of abstraction within the network), it will also be appreciated that DNNs can be particularly susceptible to issues associated with overfitting, and can incur significant computational time and resource costs due to the need to optimise multiple different training parameters. The automated tuning experiments implemented using cluster computing architecture that are described herein are therefore particularly advantageous for optimising such hyperparameters for DNNs, as they can provide improvements in relation to tuning those hyperparameters that are utilised to prevent overfitting. However, whilst overfitting is one of the main challenges addressed via hyperparameter tuning, tuning of hyperparameters that are not in place specifically to address overfitting will nevertheless contribute towards the overall goal of efficient use of computational resources during such model parameter optimisations, as will be noted subsequently.
[0031] Any given hyperparameter tuning experiment comprises multiple model training ‘jobs’, ‘operations’ or ‘trials’ (hereafter simply referred to collectively as jobs), whereby each job is defined with the aim of testing a different set or combination of hyperparameter configurations for a particular machine learning model. In more detail, an experiment begins with the definition of multiple model training jobs, which involves specifying, for each job: various configuration settings which effectively define the job itself; the objective metric or target which is used to validate the accuracy of the machine learning model in that job; the hyperparameter search space (e.g., including a list of the hyperparameters to be optimised, maximum and minimum values of the hyperparameters and other constraints); and the search algorithm that is to be used (e.g., randomised searches, grid searches, linear regression, etc.). The results of each job are output upon completion of that job. Once all the jobs are completed, or once a desired pre-defined objective is reached, whichever occurs first, the experiment is complete. The outcomes of the jobs in the completed experiment are then analysed and assessed, and the results (i.e., the optimised values of the hyperparameters) are output to the user of the system.
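The job definitions described above might, for example, be captured in structures such as those in the following sketch (Python, standard library only). The field names, the example hyperparameters and the target values are illustrative assumptions rather than requirements of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class JobSpec:
    job_id: str
    search_algorithm: str                          # e.g. "random" or "grid"
    search_space: Dict[str, Tuple[float, float]]   # hyperparameter name -> (min, max)
    objective_metric: str                          # e.g. "validation_accuracy"
    objective_target: float                        # experiment may stop once reached
    num_iterations: int                            # pre-defined iteration budget
    lease_seconds: float = 300.0                   # completion time period per iteration (illustrative)

@dataclass
class Experiment:
    name: str
    jobs: List[JobSpec] = field(default_factory=list)

# Example (hypothetical values): a two-job experiment over learning rate and dropout.
experiment = Experiment(name="tuning-demo", jobs=[
    JobSpec("job-0", "random", {"learning_rate": (1e-5, 1e-1), "dropout": (0.0, 0.5)},
            "validation_accuracy", 0.95, num_iterations=10),
    JobSpec("job-1", "grid", {"learning_rate": (1e-5, 1e-1), "dropout": (0.0, 0.5)},
            "validation_accuracy", 0.95, num_iterations=10),
])
```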
[0032] In relation to the cluster computing architecture described above, a computing application implementing a hyperparameter tuning experiment may comprise analogous application architecture that is run using the associated ‘pods’ and ‘containers’ implemented on the cluster computing nodes. For example, the application architecture may comprise a master processing ‘pod’ which oversees the overall application processing and manages resource allocation to one or more worker processing pods. Using this computing application architecture, each job in a given hyperparameter tuning experiment (i.e., each machine learning model, comprising a certain combination of hyperparameter configurations, that is to be trained and tested) may be implemented using one or more of the worker pod(s) and container(s). The overall experiment is ‘overseen’ and managed by the master pod, which maintains a shared work list, schedule or work ‘queue’ (hereafter simply referred to as a queue) of all the machine learning models that are to be tested in the experiment. The individual worker pods access the queue in order to obtain or retrieve one or more jobs for execution using the application(s) running in their container(s). In this way, multiple different training jobs can be run in parallel using the plurality of worker pods in the application architecture, thereby also making beneficial use of the resources made available by the underlying computing cluster architecture.
[0033] An example implementation of a cluster computing architecture is the Kubernetes architecture, originally developed by Google, which may be used for a multitude of different purposes (e.g., implementing machine learning workflows).
[0034] The computing application system of the present disclosure and an overview of how a hyperparameter tuning experiment may be implemented using such an application system will now be described in detail with reference to
[0035] The application system 1 (i.e., the application level of the computing system) comprises a master processing pod 2 (hereafter simply referred to as a ‘master pod’) and a plurality of worker processing pods 4, labelled as Worker Pod 1, Worker Pod 2 and Worker Pod N to indicate that there may be any number N of worker pods (hereafter simply referred to as ‘worker pods’). The master pod 2 comprises, retains and manages a queue 6 of machine learning model training jobs. As set out above, each job has a particular combination of hyperparameter settings that are to be tested, and together all of the jobs in the queue 6 form a single hyperparameter tuning experiment. The master pod 2 further comprises one or more interface modules or mechanisms 8, for example an API (Application Programming Interface), which provides internal and/or external interface functionality for the master pod 2 and (in the case of an external interface) thereby for the application system 1 as a whole. For example, this interface module 8 may be accessible to the user of the application system 1 to input and configure the details of the experiment (e.g., using command-line programming on a computing device (not shown) of the user). Additionally or alternatively, the interface module 8 may also be used by the individual worker pods 4 to communicate with the master pod 2, and to thereby access the queue 6 containing the jobs that are to be executed by the worker pods 4.
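By way of a hedged illustration only, the shared work queue 6 and the operations that the interface module 8 might expose to the worker pods 4 (enqueue a job, lease a job, renew a lease, mark a job complete) could resemble the following single-process sketch. An actual deployment would expose equivalent operations over an API backed by shared storage; all names here are assumptions made for the example.

```python
import time
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class QueueEntry:
    job_id: str
    status: str = "pending"           # pending | leased | complete
    leased_by: Optional[str] = None
    lease_expires_at: float = 0.0

class SharedWorkQueue:
    """In-memory stand-in for the shared work queue 6 maintained by the master pod."""

    def __init__(self) -> None:
        self._entries: Dict[str, QueueEntry] = {}

    def enqueue(self, job_id: str) -> None:
        # Add a model training job to the experiment's queue.
        self._entries[job_id] = QueueEntry(job_id)

    def lease(self, worker_id: str, lease_seconds: float) -> Optional[str]:
        # Hand the next pending job to a worker pod and start its leasing period.
        for entry in self._entries.values():
            if entry.status == "pending":
                entry.status = "leased"
                entry.leased_by = worker_id
                entry.lease_expires_at = time.time() + lease_seconds
                return entry.job_id
        return None                    # no unallocated jobs remain in the queue

    def renew_lease(self, job_id: str, lease_seconds: float) -> None:
        # Reset the completion time period after a successfully completed iteration.
        self._entries[job_id].lease_expires_at = time.time() + lease_seconds

    def mark_complete(self, job_id: str) -> None:
        # Flag the job in the queue as complete so no other worker pod accesses it.
        self._entries[job_id].status = "complete"
```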
[0036] In
[0037] As was also mentioned earlier, each worker pod 4 comprises a plurality of containers 10 within which one or more applications 11 may be executed in order to run the training jobs. Although only a single container “C1” is shown in association with the worker pod 4 labelled “Worker 1” for simplicity and ease of understanding, it will be understood that the corresponding functionality can also be provided by each of the other N worker pods. Furthermore, it will be appreciated that whilst the components in the system of
[0038] Specifically, in
[0039] The system further comprises one or more data stores or repositories 12, although for ease of understanding and simplicity only a single data store is shown in
[0040] Turning now to
[0041] Subsequently, the next phase of the process involves execution of the various training jobs in the queue by the plurality of worker pods 4. This phase begins by instantiating or creating a plurality of worker pods 4 in Step 115. Subsequently, each worker pod 4, having an associated amount of memory and central processing unit (CPU) resources available to it from the underlying computing infrastructure, accesses or reads from the queue 6 in Step 120 the job(s) that it can process given its capabilities and ‘pulls’ these jobs for execution. A given worker pod 4 is able to ascertain which job(s) it has the capacity to process by virtue of a specification associated with each job, which (among other information) defines the number of iterations that are to be executed in order to complete that job. This job specification may also include a specific completion time period that is associated with a particular iteration of that job; the job specification information is also included in the data files 14 contained in the data store 12.
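As a simple illustration of the job specification information held in a data file 14, the record below stores the iteration budget and the per-iteration completion time period alongside a list that accumulates one evaluation result per executed iteration. The field names and the JSON format are assumptions made for this sketch only.

```python
import json
from pathlib import Path

def create_job_record(repository_dir: Path, job_id: str, num_iterations: int,
                      lease_seconds: float) -> Path:
    # Write a fresh per-job record (data file 14) into the shared data store.
    record = {
        "job_id": job_id,
        "num_iterations": num_iterations,   # pre-defined number of iterations
        "lease_seconds": lease_seconds,     # completion time period per iteration
        "results": [],                      # one entry appended per completed iteration
    }
    path = repository_dir / f"{job_id}.json"
    path.write_text(json.dumps(record, indent=2))
    return path
```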
[0042] The worker pods 4 then each begin to run the training jobs that they have pulled from the queue 6, executing the necessary number of iterations for each job. Upon completion of each job iteration, the worker pod 4 outputs in Step 125, to the associated data file 14 in the data store 12, the evaluation results obtained from that iteration of the training job.
[0043] Once a worker pod 4 has begun to run iterations of a model training job that it has accessed from the queue, a few different scenarios may play out. In a first ‘complete success’ scenario, the worker pod 4 is able to complete the job(s) accessed from the queue 6 successfully; that is, all of the required iterations to be executed for that training job are completed within their associated completion time period, and the corresponding evaluation results for each iteration (as well as any final output metrics from the job as a whole) are output and written to the corresponding data file 14 contained in the data store 12. In this scenario, the worker pod 4 may also be configured to provide confirmation to the queue 6 (maintained by the master pod 2) that the job(s) which have been allocated to that worker pod 4 have been completed. These completed jobs are then marked or otherwise indicated in the queue 6 as having been completed, indicating that they do not need to be accessed by any other worker pods. This process is carried out in Step 130. Subsequently, the worker pod 4 is now free once again to access the queue 6 to determine in Step 135 whether there are any jobs remaining that have not been allocated to other worker pods 4, and which also match the computing resources of the accessing worker pod 4 in question. If any free (unallocated) jobs remain, the worker pod 4 repeats Steps 120 to 130: pulling the job(s) that can be handled using the memory and computer processing capability available; running iterations of that job; and writing the evaluation results of each iteration to the corresponding data file 14 in the data store 12.
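The ‘complete success’ processing loop of Steps 120 to 135 might be sketched as follows, assuming a queue object exposing the lease/renew_lease/mark_complete operations sketched earlier and a repository of per-job JSON records. The train_one_iteration() stand-in and the 300-second default leasing period are illustrative assumptions, not part of the disclosure.

```python
import json
import random
from pathlib import Path

def train_one_iteration(job_record: dict, iteration: int) -> dict:
    # Stand-in for one iteration of model training; returns an evaluation result.
    return {"iteration": iteration, "objective": random.random(), "successful": True}

def run_worker(worker_id: str, queue, repository_dir: Path,
               lease_seconds: float = 300.0) -> None:
    while True:
        job_id = queue.lease(worker_id, lease_seconds)      # Step 120: pull a job from the queue
        if job_id is None:                                   # Step 135: no unallocated jobs remain
            break
        record_path = repository_dir / f"{job_id}.json"
        record = json.loads(record_path.read_text())         # retrieve the job's record (data file 14)
        for iteration in range(record["num_iterations"]):
            result = train_one_iteration(record, iteration)
            record["results"].append(result)
            record_path.write_text(json.dumps(record))       # Step 125: write evaluation results
            queue.renew_lease(job_id, lease_seconds)          # re-set the completion time period
        queue.mark_complete(job_id)                           # Step 130: mark the job as completed
```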
[0044] However, if there are no longer any unallocated jobs remaining in the queue, the accessing worker pod 4 in question is considered to have completed its tasks and to be no longer required. After all of the worker pods 4 have completed their jobs, and there remain no outstanding jobs in the queue 6, the experiment as a whole is considered complete. The evaluation results and output metrics that have been written to the individual data files 14 in the data store 12 can then be analysed to ascertain the final outcome of the experiment—in other words, to identify the optimal hyperparameter settings that should be used when applying that machine learning model to test data for different implementation purposes.
[0045] The above-described cluster computing infrastructure, the application processing system implemented thereby and its corresponding method of hyperparameter tuning for machine learning models have multiple associated advantages in relation to maximising the efficiency of computing resource use whilst minimising processing time and the load on any given computing entity. The parallelisation provided by the cluster computing architecture and the application processing system architecture allows multiple different model training jobs to be implemented and executed simultaneously by allocating the amount of work required for these training jobs appropriately based on the processing and storage capacity of each of the worker pods 4. Nevertheless, the queue of jobs that are to be completed and the tuning experiment as a whole can still be managed and controlled centrally by the user via the master pod 2. This parallelised computer processing mechanism is particularly suited to, and advantageous for, the implementation of hyperparameter search algorithms that are stateless (i.e., where knowledge of the results from previous hyperparameter testing is not required in order to operate), such as random searches and grid searches. Multiple random or grid search settings can be executed independently from one another using the parallelised approach of the application-level processing system described herein.
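For illustration, the two stateless search strategies mentioned above might generate their hyperparameter configurations as in the sketch below; because neither strategy needs the results of earlier trials, each generated configuration can be evaluated independently by a separate worker pod. The hyperparameter names, ranges and values are assumptions for the example only.

```python
import itertools
import random
from typing import Dict, Iterable, List, Tuple

def random_search(space: Dict[str, Tuple[float, float]], n_trials: int) -> List[Dict[str, float]]:
    # Sample each hyperparameter uniformly at random from its (min, max) range.
    return [{name: random.uniform(lo, hi) for name, (lo, hi) in space.items()}
            for _ in range(n_trials)]

def grid_search(space: Dict[str, Iterable]) -> List[Dict]:
    # Enumerate every combination of the discrete values supplied for each hyperparameter.
    names = list(space)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(space[name] for name in names))]

# Example (hypothetical ranges and values):
random_trials = random_search({"learning_rate": (1e-5, 1e-1), "dropout": (0.0, 0.5)}, n_trials=8)
grid_trials = grid_search({"learning_rate": [1e-4, 1e-3, 1e-2], "batch_size": [32, 64]})
```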
[0046] It will however be appreciated that the first ‘complete success’ scenario in which every worker pod 4 completes all its allocated jobs successfully, is effectively an ideal (theoretical) scenario. In practice, there will usually be one or more worker pods 4 that are not able to complete all of their allocated jobs successfully, but will instead ‘crash’ and fail to complete some or all of their allocated job(s). The details of such a scenario, and in particular how crashes/failures/faults of the worker pods are handled by the application processing system of
[0047] As shown in
[0048] As mentioned previously, iterations of a training job have an associated completion time period (i.e., a predefined time within which it is anticipated that a given iteration should be completed by a worker pod operating under normal conditions), and completion of all of the training job iterations within their associated completion time periods results in successful completion of the overall job itself. However, if it is determined in Step 215 that the worker pod 4 is unable to complete any iteration of its allocated model training job within the corresponding completion time period, the job will be returned to the queue and may subsequently be re-allocated to another worker pod 4. The completion time period may therefore also be referred to as the ‘leasing period’, since it is the time period which determines whether the job remains leased from the queue 6 by the worker pod 4 executing the job, or is returned thereto. This return of the job to the queue may occur if the worker pod 4 crashes, or if the availability of its memory or processing resources is reduced for any reason to the point where the worker pod 4 is unable to complete the job iteration within the corresponding completion time period. In some cases, the leasing periods may be monitored by the master pod 2 (or a sub-component thereof): where it is determined that the leasing period for a particular job has expired, the master pod 2 may access the queue 6 and alter a status of that job such that it may thereafter be accessed by (and allocated to) a different worker pod 4 for completion.
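A minimal sketch of such a master-side sweep, assuming queue entries represented as simple dictionaries with status, leased_by and lease_expires_at fields, is given below; the representation is an assumption for the example only.

```python
import time
from typing import Any, Dict

def requeue_expired_jobs(queue_entries: Dict[str, Dict[str, Any]]) -> None:
    # Sweep performed by the master pod (or a sub-component thereof): any leased job
    # whose leasing period has expired is returned to the 'pending' state so that a
    # different worker pod may access and execute it.
    now = time.time()
    for entry in queue_entries.values():
        if entry.get("status") == "leased" and entry.get("lease_expires_at", 0.0) < now:
            entry["status"] = "pending"
            entry["leased_by"] = None
```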
[0049] In this particular illustrated example, the completion time period is associated with a specific iteration of the job (namely, the first iteration that is to be performed in the job) and is arranged to be re-set (by the worker pod 4 itself) upon successful completion of that iteration and to begin ‘running’ again in relation to each subsequent iteration. It will therefore be appreciated that, in this case, the completion time period may be defined to correspond to a relatively short period. This dynamic update and refresh of the completion time period by the processing worker pod 4 in relation to each subsequent iteration in a particular job is particularly advantageous in relation to fault-handling for the present system. This is because the delay between the point in time at which the worker pod 4 crashes (and is unable to continue executing the training job) and the point in time at which the job is returned to the queue 6 (and hence can be re-allocated to a new worker pod 4) is minimised. The efficiency with which all the jobs in the experiment can be processed is thereby increased, even when faults or errors develop in one or more of the worker pods in the system.
[0050] It will however be appreciated that the fault-handling process may instead be configured to operate in a slightly different manner. For example, additionally or alternatively, a completion time period may be set in association with the training job as a whole, such that if the entire job (i.e., all of the iterations) is not completed by the worker pod 4 within a particular allotted time, this will be considered by the system to constitute a ‘failure’ of the worker pod 4. The job will therefore be returned to the queue 6 as described earlier and may subsequently be re-allocated to a different worker pod.
[0051] The above fault-handling process comprises some additional aspects that are particularly evident when a worker pod 4 pulls up a previously failed (semi-complete) job from the queue 6.
[0052] As discussed earlier, the worker pod 4 which initially executed some or all of a particular job (hereafter referred to as the ‘original worker pod’) would have, as part of its normal job-processing functionality, written the evaluation results obtained for each completed iteration of that training job to the corresponding data file 14 in the data store 12. As a result, this data file 14 contains a record of all of the successfully completed iterations for a given job. After the original worker pod 4 has crashed and the job has been returned to the queue 6, the subsequent worker pod 4 to which this job is allocated (hereafter referred to as the ‘new worker pod’) will, prior to executing the job, access in Step 230 the corresponding data file 14 for that job from the data store 12. As a result, the new worker pod 4 will be able to identify, prior to executing the job, the last successfully completed iteration for that job; the remaining proportion of the job to be completed can therefore also be ascertained in this step. This determination ensures that the new worker pod 4 will then be able to execute only the remaining proportion of the job which the original worker pod 4 was not able to complete. In other words, the new worker pod 4 will define, as its job iteration starting point, the iteration number of the last successfully completed iteration, and then continue to process the job as usual thereafter in the same manner as would have been done in relation to a ‘fresh’ (i.e., not previously allocated or semi-complete) job. This means that the new worker pod 4 will then proceed in Step 235 to complete the remaining iterations of the job, whereby for each successfully completed iteration the new worker pod 4 will: output the evaluation results to the corresponding data file 14 in the data store 12; update/re-set the completion time period in relation to the subsequent iteration; repeat these two steps until all iterations are completed; and finally update the queue 6 with a ‘job complete’ status indicator.
[0053] Furthermore, as part of the fault-handling process, when reading the job information from the corresponding data file 14 in the data store 12, the new worker pod 4 is also configured to delete any data or data files that were output to the data store 12 by the original worker pod 4 in respect of subsequent (incorrect or incomplete) iterations of the job in question (i.e., after failure or crash of that original worker pod 4) because these data files will not be representative of the results of the job. Alternatively, it may be possible for the new worker pod 4 to simply overwrite the previously-output incorrect data files (associated with the incomplete iterations) with new data from the subsequent iterations that the new worker pod 4 successfully completes.
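The resume-and-clean-up behaviour described in the preceding two paragraphs might be sketched as follows, assuming per-job JSON records in which each stored result carries a ‘successful’ flag; the record layout is an assumption for this example. The returned index is the iteration from which the new worker pod 4 would continue execution.

```python
import json
from pathlib import Path

def resume_point(record_path: Path) -> int:
    # Read the job's record, discard any evaluation results written for iterations
    # beyond the last successful one, and return the index of the next iteration
    # that the new worker pod should execute.
    record = json.loads(record_path.read_text())
    completed = 0
    for result in record["results"]:
        if result.get("successful", True):
            completed += 1
        else:
            break                                   # first unsuccessful iteration marks the cut-off
    record["results"] = record["results"][:completed]   # delete/overwrite stale result data
    record_path.write_text(json.dumps(record))
    return completed
```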
[0054] As will be appreciated, the above-described fault-handling mechanism provides multiple advantages. For example, the new worker pod 4 is able to pick up or take over the execution of any given training job relatively seamlessly from where the original worker pod 4 ceased its processing; any delays in processing time as a result of such faults are thereby minimised, and any duplicated processing on the part of the new worker pod 4 is thereby avoided. The effect on the processing resources of the overall system, as well as on the processing time required for the entire experiment, is therefore minimised if any worker pod fails or crashes over the course of the experiment. In addition, as the new worker pod 4 which picks up the semi-complete job is configured to delete or overwrite any incorrect data files created by the original worker pod 4 in respect of uncompleted iterations, the creation and storage of fault artefacts within the system as a whole is thereby reduced (or even avoided completely); corruption of subsequent evaluation results by the incorrect data from an incomplete iteration is also prevented. Furthermore, fault-handling in this manner helps to ensure idempotence of the system: running the system with the same input parameters and settings should produce the same or corresponding evaluation results.
[0055] Many modifications may be made to the above examples without departing from the scope of the present disclosure as defined in the accompanying claims. For example, it will be appreciated that although the data store 12 is shown as being separate from (but in operative communication with) the master pod 2, the master pod 2 may in fact comprise the data store 12. Furthermore, each of the worker pods 4 may comprise its own individual data store to which the evaluation data is initially written when executing the training jobs. The data from the worker pods’ individual data stores may then be periodically written to the main data store 12, for example after a certain number of iterations, or after success/failure of the entire job.