PREDICTION OF DISEASE STATUS
20220285027 · 2022-09-08
Assignee
Inventors
- Christian GOSSENS (Basel, CH)
- Florian Lipsmeier (Basel, CH)
- Cedric Andre Marie Vincent Geoffrey SIMILLION (Lutzelfluh-Goldbach, CH)
- Michael LINDEMANN (Schopfheim, DE)
CPC classification
G06N7/01
PHYSICS
A61B5/4082
HUMAN NECESSITIES
A61B5/4088
HUMAN NECESSITIES
G16H50/70
PHYSICS
G16H50/20
PHYSICS
International classification
Abstract
A machine learning system (110) for determining at least one analysis model for predicting at least one target variable indicative of a disease status is proposed. The machine learning system (110) comprises: at least one communication interface (114) configured for receiving input data, wherein the input data comprises a set of historical digital biomarker feature data, wherein the set of historical digital biomarker feature data comprises a plurality of measured values indicative of the disease status to be predicted; at least one model unit (116) comprising at least one machine learning model comprising at least one algorithm; at least one processing unit (112), wherein the processing unit (112) is configured for determining at least one training data set and at least one test data set from the input data set, wherein the processing unit (112) is configured for determining the analysis model by training the machine learning model with the training data set, wherein the processing unit (112) is configured for predicting the target variable on the test data set using the determined analysis model, wherein the processing unit (112) is configured for determining performance of the determined analysis model based on the predicted target variable and a true value of the target variable of the test data set.
Claims
1. A machine learning system (110) for determining at least one analysis model for predicting at least one target variable indicative of a disease status comprising: at least one communication interface (114) configured for receiving input data, wherein the input data comprises a set of historical digital biomarker feature data, wherein the set of historical digital biomarker feature data comprises a plurality of measured values indicative of the disease status to be predicted, wherein the historical digital biomarker feature data is experimental data determined by at least one mobile device which comprises a plurality of different measurement values per subject relating to symptoms of the disease, wherein the input data is determined in an active test using the mobile device such as at least one cognition test and/or at least one hand motor function test and/or at least one mobility test; at least one model unit (116) comprising at least one machine learning model comprising at least one algorithm; at least one processing unit (112), wherein the processing unit (112) is configured for determining at least one training data set and at least one test data set from the input data set, wherein the processing unit (112) is configured for determining the analysis model by training the machine learning model with the training data set, wherein the training is a process of determining parameters of the algorithm of the machine learning model on the training data set, wherein the training is performed iteratively on the training data sets of different subjects, wherein the analysis model is a regression model, wherein the algorithm of the machine learning model is at least one algorithm selected from the group consisting of: k nearest neighbors (kNN); linear regression; partial least-squares (PLS); random forest (RF); and extremely randomized Trees (XT), or wherein the analysis model is a classification model, wherein the algorithm of the machine learning model is at least 
one algorithm selected from the group consisting of: k nearest neighbors (kNN); support vector machines (SVM); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA); naïve Bayes (NB); random forest (RF); and extremely randomized Trees (XT), wherein the processing unit (112) is configured for predicting the target variable on the test data set using the determined analysis model, wherein the processing unit (112) is configured for determining performance of the determined analysis model based on the predicted target variable and a true value of the target variable of the test data set, wherein the machine learning system (110) comprises at least one output interface (118), wherein the output interface (118) is configured for providing at least one output, wherein the output comprises at least one information about the performance of the determined analysis model, wherein the information about the performance of the determined analysis model comprises one or more of at least one scoring chart, at least one predictions plot, at least one correlations plot, and at least one residuals plot, wherein the model unit (116) comprises a plurality of machine learning models, wherein the machine learning models are distinguished by their algorithm, wherein the processing unit (112) is configured for determining an analysis model for each of the machine learning models by training the respective machine learning model with the training data set and for predicting the target variables on the test data set using the determined analysis models, wherein the processing unit (112) is configured for determining performance of each of the determined analysis models based on the predicted target variables and the true value of the target variable of the test data set, wherein the processing unit (112) is configured for determining the analysis model having the best performance.
2. The machine learning system (110) of claim 1, wherein the disease whose status is to be predicted is multiple sclerosis and the target variable is an expanded disability status scale (EDSS) value, or wherein the disease whose status is to be predicted is spinal muscular atrophy and the target variable is a forced vital capacity (FVC) value, or wherein the disease whose status is to be predicted is Huntington's disease and the target variable is a total motor score (TMS) value.
3. The machine learning system (110) of claim 1, wherein the processing unit (112) is configured for generating and/or creating per subject of the input data a training data set and a test data set, wherein the test data set comprises data of one subject, wherein the training data set comprises the other input data.
4. The machine learning system (110) of claim 1, wherein the processing unit (112) is configured for extracting features from the input data, wherein the processing unit (112) is configured for ranking the features by using a maximum-relevance-minimum-redundancy technique.
5. The machine learning system (110) of claim 4, wherein the processing unit (112) is configured for considering different numbers of features for determining the analysis model by training the machine learning model with the training data set.
6. The machine learning system (110) of claim 1, wherein the processing unit (112) is configured for pre-processing the input data, wherein the pre-processing comprises at least one filtering process for input data fulfilling at least one quality criterion.
7. The machine learning system (110) of claim 1, wherein the processing unit (112) is configured for performing one or more of at least one stabilizing transformation; at least one aggregation; and at least one normalization for the training data set and for the test data set.
8. A computer-implemented method for determining at least one analysis model for predicting at least one target variable indicative of a disease status, using the machine learning system (110) of claim 1, wherein the method comprises the following steps: a) receiving input data via at least one communication interface (114), wherein the input data comprises a set of historical digital biomarker feature data, wherein the set of historical digital biomarker feature data comprises a plurality of measured values indicative of the disease status to be predicted; and, by at least one processing unit (112): b) determining at least one training data set and at least one test data set from the input data set; c) determining the analysis model by training a machine learning model comprising at least one algorithm with the training data set; d) predicting the target variable on the test data set using the determined analysis model; e) determining performance of the determined analysis model based on the predicted target variable and a true value of the target variable of the test data set.
9. The method of claim 8, wherein in step c) a plurality of analysis models is determined by training a plurality of machine learning models with the training data set, wherein the machine learning models are distinguished by their algorithm, wherein in step d) a plurality of target variables is predicted on the test data set using the determined analysis models, wherein in step e) the performance of each of the determined analysis models is determined based on the predicted target variables and the true value of the target variable of the test data set, wherein the method further comprises determining the analysis model having the best performance.
10. Computer program for determining at least one analysis model for predicting at least one target variable indicative of a disease status, configured for causing a computer or computer network to fully or partially perform the method for determining at least one analysis model for predicting at least one target variable indicative of a disease status as in the method of claim 8, when executed on the computer or computer network, wherein the computer program is configured to perform at least steps b) to e) of the method for determining at least one analysis model for predicting at least one target variable indicative of a disease status according to any one of the preceding claims referring to a method.
11. The machine learning system (110) of claim 1 wherein the machine learning system is for determining an analysis model for predicting one or more of an expanded disability status scale (EDSS) value indicative of multiple sclerosis, a forced vital capacity (FVC) value indicative of spinal muscular atrophy, or a total motor score (TMS) value indicative of Huntington's disease.
Description
BRIEF DESCRIPTION OF THE FIGURES
[0152] Further optional features and embodiments will be disclosed in more detail in the subsequent description of embodiments, preferably in conjunction with the dependent claims. Therein, the respective optional features may be realized in an isolated fashion as well as in any arbitrary feasible combination, as the skilled person will realize. The scope of the invention is not restricted by the preferred embodiments. The embodiments are schematically depicted in the Figures. Therein, identical reference numbers in these Figures refer to identical or functionally comparable elements.
[0153] In the Figures:
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0158] The analysis model may be a mathematical model configured for predicting at least one target variable for at least one state variable. The analysis model may be a regression model or a classification model. The regression model may be an analysis model comprising at least one supervised learning algorithm having as output a numerical value within a range. The classification model may be an analysis model comprising at least one supervised learning algorithm having as output a classifier such as “ill” or “healthy”.
[0159] The target variable value which is to be predicted may depend on the disease whose presence or status is to be predicted. The target variable may be either numerical or categorical. For example, the target variable may be categorical and may be “positive” in case of presence of the disease or “negative” in case of absence of the disease. The disease status may be a health condition and/or a medical condition and/or a disease stage. For example, the disease status may be healthy or ill and/or presence or absence of disease. For example, the disease status may be a value relating to a scale indicative of disease stage. The target variable may be numerical such as at least one value and/or scale value. The target variable may directly relate to the disease status and/or may indirectly relate to the disease status. For example, the target variable may need further analysis and/or processing for deriving the disease status. For example, the target variable may be a value which needs to be compared to a table and/or lookup table for determining the disease status.
[0160] The machine learning system 110 comprises at least one processing unit 112 such as a processor, microprocessor, or computer system configured for machine learning, in particular for executing a logic in a given algorithm. The machine learning system 110 may be configured for performing and/or executing at least one machine learning algorithm, wherein the machine learning algorithm is configured for building the at least one analysis model based on the training data. The processing unit 112 may comprise at least one processor. In particular, the processing unit 112 may be configured for processing basic instructions that drive the computer or system. As an example, the processing unit 112 may comprise at least one arithmetic logic unit (ALU), at least one floating-point unit (FPU), such as a math coprocessor or a numeric coprocessor, a plurality of registers and a memory, such as a cache memory. In particular, the processing unit 112 may be a multi-core processor. The processing unit 112 may be configured for machine learning. The processing unit 112 may comprise a Central Processing Unit (CPU) and/or one or more Graphics Processing Units (GPUs) and/or one or more Application Specific Integrated Circuits (ASICs) and/or one or more Tensor Processing Units (TPUs) and/or one or more field-programmable gate arrays (FPGAs) or the like.
[0161] The machine learning system comprises at least one communication interface 114 configured for receiving input data. The communication interface 114 may be configured for transferring information from a computational device, e.g. a computer, such as to send or output information, e.g. onto another device. Additionally or alternatively, the communication interface 114 may be configured for transferring information onto a computational device, e.g. onto a computer, such as to receive information. The communication interface 114 may specifically provide means for transferring or exchanging information. In particular, the communication interface 114 may provide a data transfer connection, e.g. Bluetooth, NFC, inductive coupling or the like. As an example, the communication interface 114 may be or may comprise at least one port comprising one or more of a network or internet port, a USB-port and a disk drive. The communication interface 114 may be at least one web interface.
[0162] The input data comprises a set of historical digital biomarker feature data, wherein the set of historical digital biomarker feature data comprises a plurality of measured values indicative of the disease status to be predicted. The set of historical digital biomarker feature data comprises a plurality of measured values per subject indicative of the disease status to be predicted. For example, for model building for predicting at least one target indicative of multiple sclerosis the digital biomarker feature data may be data from the Floodlight POC study. For example, for model building for predicting at least one target indicative of spinal muscular atrophy the digital biomarker feature data may be data from the OLEOS study. For example, for model building for predicting at least one target indicative of Huntington's disease the digital biomarker feature data may be data from the HD OLE study, ISIS 44319-CS2. The input data may be determined in at least one active test and/or in at least one passive monitoring. For example, the input data may be determined in an active test using at least one mobile device such as at least one cognition test and/or at least one hand motor function test and/or at least one mobility test.
[0163] The input data further may comprise target data. The target data comprises clinical values to predict, in particular one clinical value per subject. The target data may be either numerical or categorical. The clinical value may directly or indirectly refer to the status of the disease.
[0164] The processing unit 112 may be configured for extracting features from the input data. The extracting of features may comprise one or more of data aggregation, data reduction, data transformation and the like. The processing unit 112 may be configured for ranking the features. For example, the features may be ranked with respect to their relevance, i.e. with respect to correlation with the target variable, and/or the features may be ranked with respect to redundancy, i.e. with respect to correlation between features. The processing unit 112 may be configured for ranking the features by using a maximum-relevance-minimum-redundancy technique. This method ranks all features using a trade-off between relevance and redundancy. Specifically, the feature selection and ranking may be performed as described in Ding C., Peng H., “Minimum redundancy feature selection from microarray gene expression data”, J Bioinform Comput Biol. 2005 April; 3(2):185-205, PubMed PMID:15852500. The feature selection and ranking may be performed by using a modified method compared to the method described in Ding et al.: the maximum correlation coefficient may be used rather than the mean correlation coefficient, and an additional transformation may be applied to it. In case of a regression model as analysis model, the value of the correlation coefficient may be raised to the 5th power. In case of a classification model as analysis model, the value of the correlation coefficient may be multiplied by 10.
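As an illustration only, the ranking step above can be sketched as a greedy relevance-minus-redundancy loop. The helper name `mrmr_rank` and the greedy selection scheme are assumptions for this sketch; the cited Ding and Peng method and the exact modification described above may differ in detail:

```python
import numpy as np

def mrmr_rank(X, y, regression=True):
    """Greedy maximum-relevance-minimum-redundancy ranking (sketch).

    Relevance is |corr(feature, target)| with the transformation described
    above (raised to the 5th power for regression, multiplied by 10 for
    classification); redundancy is the maximum |corr| with the features
    ranked so far.
    """
    n_features = X.shape[1]
    relevance = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)]
    )
    relevance = relevance ** 5 if regression else relevance * 10
    order = [int(np.argmax(relevance))]          # most relevant feature first
    remaining = [j for j in range(n_features) if j != order[0]]
    while remaining:
        scores = []
        for j in remaining:
            # redundancy: maximum correlation with already-ranked features
            redundancy = max(
                abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in order
            )
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        order.append(best)
        remaining.remove(best)
    return order
```

A feature identical to the target is ranked first, while redundant or uninformative features fall behind.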
[0165] The machine learning system 110 comprises at least one model unit 116 comprising at least one machine learning model comprising at least one algorithm. The model unit 116 may comprise a plurality of machine learning models, e.g. different machine learning models for building the regression model and machine learning models for building the classification model. For example, the analysis model may be a regression model and the algorithm of the machine learning model may be at least one algorithm selected from the group consisting of: k nearest neighbors (kNN); linear regression; partial least-squares (PLS); random forest (RF); and extremely randomized Trees (XT). For example, the analysis model may be a classification model and the algorithm of the machine learning model may be at least one algorithm selected from the group consisting of: k nearest neighbors (kNN); support vector machines (SVM); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA); naïve Bayes (NB); random forest (RF); and extremely randomized Trees (XT).
[0166] The processing unit 112 may be configured for pre-processing the input data. The pre-processing may comprise at least one filtering process for input data fulfilling at least one quality criterion. For example, the input data may be filtered to remove missing variables.
[0167] For example, the pre-processing may comprise excluding data from subjects with less than a pre-defined minimum number of observations.
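A minimal sketch of this exclusion rule follows. The flat list-of-dicts layout and the helper name `filter_min_observations` are assumptions for illustration; the actual input format is not specified in the text:

```python
from collections import Counter

def filter_min_observations(records, min_obs):
    """Drop all data from subjects contributing fewer than min_obs observations.

    `records` is assumed to be a list of dicts, each carrying a 'subject' key
    (a hypothetical layout for this sketch).
    """
    counts = Counter(r["subject"] for r in records)
    return [r for r in records if counts[r["subject"]] >= min_obs]
```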
[0168] The processing unit 112 is configured for determining at least one training data set and at least one test data set from the input data set. The training data set may comprise a plurality of training data sets. In particular, the training data set comprises a training data set per subject of the input data. The test data set may comprise a plurality of test data sets. In particular, the test data set comprises a test data set per subject of the input data. The processing unit 112 may be configured for generating and/or creating per subject of the input data a training data set and a test data set, wherein the test data set per subject may comprise data only of that subject, whereas the training data set for that subject comprises all other input data.
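The per-subject split described above is a leave-one-subject-out scheme, which can be sketched as follows. The generator name and the list-of-dicts layout with a 'subject' key are assumptions for this illustration:

```python
def leave_one_subject_out(records):
    """Yield (held_out_subject, training_set, test_set) triples.

    For each subject, the test set contains only that subject's data and the
    training set contains all other input data, as described above.
    """
    subjects = sorted({r["subject"] for r in records})
    for held_out in subjects:
        test = [r for r in records if r["subject"] == held_out]
        train = [r for r in records if r["subject"] != held_out]
        yield held_out, train, test
```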
[0169] The processing unit 112 may be configured for performing at least one data aggregation and/or data transformation on both of the training data set and the test data set for each subject. The transformation and feature ranking steps may be performed without splitting into training data set and test data set. This may enable inference of e.g. important features from the data. The processing unit 112 may be configured for performing one or more of at least one stabilizing transformation; at least one aggregation; and at least one normalization for the training data set and for the test data set. For example, the processing unit 112 may be configured for subject-wise data aggregation of both of the training data set and the test data set, wherein a mean value of the features is determined for each subject. For example, the processing unit 112 may be configured for variance stabilization, wherein for each feature at least one variance stabilizing function is applied. The variance stabilizing function may be at least one function selected from the group consisting of: a logistic, which may be used if all values are greater than 300 and no values are between 0 and 1; a logit, which may be used if all values are between 0 and 1, inclusive; a sigmoid; a log10, which may be used if all values are ≥0. The processing unit 112 may be configured for transforming values of each feature using each of the variance stabilizing functions. The processing unit 112 may be configured for evaluating each of the resulting distributions, including the original one, using a certain criterion. In case of a classification model as analysis model, i.e. when the target variable is discrete, said criterion may be to what extent the obtained values are able to separate the different classes. Specifically, the maximum of all class-wise mean silhouette values may be used to this end. 
In case of a regression model as analysis model, the criterion may be a mean absolute error obtained after regression of values, which were obtained by applying the variance stabilizing function, against the target variable. Using this selection criterion, the processing unit 112 may be configured for determining the best possible transformation, if any are better than the original values, on the training data set. The best possible transformation can subsequently be applied to the test data set. For example, the processing unit 112 may be configured for z-score transformation, wherein for each transformed feature the mean and standard deviation are determined on the training data set, wherein these values are used for z-score transformation on both the training data set and the test data set. For example, the processing unit 112 may be configured for performing three data transformation steps on both the training data set and the test data set, wherein the transformation steps comprise: 1. subject-wise data aggregation; 2. variance stabilization; 3. z-score transformation. The processing unit 112 may be configured for determining and/or providing at least one output of the ranking and transformation steps. For example, the output of the ranking and transformation steps may comprise at least one diagnostics plot. The diagnostics plot may comprise at least one principal component analysis (PCA) plot and/or at least one pair plot comparing key statistics related to the ranking procedure.
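The regression-case selection of a variance stabilizing function, and the train-then-apply z-score step, can be sketched as follows. This is a simplified illustration covering only a subset of the listed functions (identity, logit, log10, sigmoid); the exact applicability rules, the epsilon guards, and the use of log10(x+1) are assumptions of this sketch:

```python
import numpy as np

def candidate_transforms(x):
    """Return applicable variance-stabilizing transforms for feature values x
    (simplified applicability rules; sketch only)."""
    cands = {"identity": x}
    if np.all((x >= 0) & (x <= 1)):
        eps = 1e-9  # guard against log of 0 (an assumption of this sketch)
        cands["logit"] = np.log((x + eps) / (1 - x + eps))
    if np.all(x >= 0):
        cands["log10"] = np.log10(x + 1)  # shift by 1 is an assumption
    cands["sigmoid"] = 1.0 / (1.0 + np.exp(-x))
    return cands

def best_transform_regression(x, y):
    """Pick the transform minimising the mean absolute error of a simple
    least-squares regression of the transformed feature against the target,
    mirroring the regression criterion described above."""
    best_name, best_mae = None, np.inf
    for name, t in candidate_transforms(x).items():
        A = np.column_stack([t, np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        mae = float(np.mean(np.abs(A @ coef - y)))
        if mae < best_mae:
            best_name, best_mae = name, mae
    return best_name, best_mae

def zscore_fit_apply(train, test):
    """Determine mean and standard deviation on the training data only,
    then apply the same z-score transform to both sets."""
    mu, sd = train.mean(), train.std()
    return (train - mu) / sd, (test - mu) / sd
```

When the target is an exact log of the feature, the log10 transform wins the MAE criterion, as expected.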
[0170] The processing unit 112 is configured for determining the analysis model by training the machine learning model with the training data set. The training may comprise at least one optimization or tuning process, wherein a best parameter combination is determined. The training may be performed iteratively on the training data sets of different subjects. The processing unit 112 may be configured for considering different numbers of features for determining the analysis model by training the machine learning model with the training data set. The algorithm of the machine learning model may be applied to the training data set using a different number of features, e.g. depending on their ranking. The training may comprise n-fold cross validation to get a robust estimate of the model parameters. The training of the machine learning model may comprise at least one controlled learning process, wherein at least one hyper-parameter is chosen to control the training process. If necessary, the training step is repeated to test different combinations of hyper-parameters.
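The tuning loop over different feature counts with n-fold cross validation can be sketched as below. Ordinary least squares stands in for whichever listed algorithm is used; the helper names and the simple contiguous fold split are assumptions of this sketch:

```python
import numpy as np

def cv_mae_linear(X, y, n_splits=5):
    """Mean absolute error of ordinary least squares under simple k-fold CV."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, n_splits)
    errs = []
    for fold in folds:
        tr = np.setdiff1d(idx, fold)
        A = np.column_stack([X[tr], np.ones(len(tr))])
        coef, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
        A_test = np.column_stack([X[fold], np.ones(len(fold))])
        errs.append(np.mean(np.abs(A_test @ coef - y[fold])))
    return float(np.mean(errs))

def select_n_features(X, y, ranking):
    """Try increasing numbers of top-ranked features and keep the count
    with the lowest cross-validated error (the tuning idea above)."""
    best_n, best_err = 1, np.inf
    for n in range(1, len(ranking) + 1):
        err = cv_mae_linear(X[:, ranking[:n]], y)
        if err < best_err:
            best_n, best_err = n, err
    return best_n, best_err
```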
[0171] In particular subsequent to the training of the machine learning model, the processing unit 112 is configured for predicting the target variable on the test data set using the determined analysis model. The processing unit 112 may be configured for predicting the target variable for each subject based on the test data set of that subject using the determined analysis model. The processing unit 112 may be configured for predicting the target variable for each subject on the respective training and test data sets using the analysis model. The processing unit 112 may be configured for recording and/or storing both the predicted target variable per subject and the true value of the target variable per subject, for example, in at least one output file.
[0172] The processing unit 112 is configured for determining performance of the determined analysis model based on the predicted target variable and the true value of the target variable of the test data set. The performance may be characterized by deviations between the predicted target variable and the true value of the target variable. The machine learning system 110 may comprise at least one output interface 118. The output interface 118 may be designed identical to the communication interface 114 and/or may be formed integral with the communication interface 114. The output interface 118 may be configured for providing at least one output. The output may comprise at least one information about the performance of the determined analysis model. The information about the performance of the determined analysis model may comprise one or more of at least one scoring chart, at least one predictions plot, at least one correlations plot, and at least one residuals plot.
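The deviations between predicted and true target values can be summarised by a few standard figures of the kind a scoring chart or residuals plot would visualise. The function name and the choice of MAE, RMSE and Pearson correlation are assumptions of this sketch:

```python
import numpy as np

def performance_report(y_true, y_pred):
    """Summarise deviations between predicted and true target values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_pred - y_true
    return {
        "mae": float(np.mean(np.abs(residuals))),        # mean absolute error
        "rmse": float(np.sqrt(np.mean(residuals ** 2))), # root mean squared error
        "pearson_r": float(np.corrcoef(y_true, y_pred)[0, 1]),
    }
```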
[0173] The model unit 116 may comprise a plurality of machine learning models, wherein the machine learning models are distinguished by their algorithm. For example, for building a regression model the model unit 116 may comprise the following algorithms: k nearest neighbors (kNN), linear regression, partial least-squares (PLS), random forest (RF), and extremely randomized Trees (XT). For example, for building a classification model the model unit 116 may comprise the following algorithms: k nearest neighbors (kNN), support vector machines (SVM), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), naïve Bayes (NB), random forest (RF), and extremely randomized Trees (XT). The processing unit 112 may be configured for determining an analysis model for each of the machine learning models by training the respective machine learning model with the training data set and for predicting the target variables on the test data set using the determined analysis models.
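The compare-and-select step over several candidate models can be sketched as follows, using two of the listed algorithms (k nearest neighbors and linear regression) implemented minimally; the helper names and the use of test MAE as the performance score are assumptions of this sketch:

```python
import numpy as np

def knn_predict(X_tr, y_tr, X_te, k=3):
    """Plain k-nearest-neighbour regression (one of the listed algorithms)."""
    preds = []
    for x in X_te:
        d = np.linalg.norm(X_tr - x, axis=1)
        nn = np.argsort(d)[:k]          # indices of the k closest training rows
        preds.append(y_tr[nn].mean())
    return np.array(preds)

def linreg_predict(X_tr, y_tr, X_te):
    """Ordinary least squares with intercept (the linear regression entry)."""
    A = np.column_stack([X_tr, np.ones(len(X_tr))])
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([X_te, np.ones(len(X_te))]) @ coef

def best_model(models, X_tr, y_tr, X_te, y_te):
    """Train/apply every candidate model and keep the one with the lowest
    test MAE, mirroring the best-performance selection described above."""
    scores = {
        name: float(np.mean(np.abs(fn(X_tr, y_tr, X_te) - y_te)))
        for name, fn in models.items()
    }
    return min(scores, key=scores.get), scores
```

On exactly linear data the linear regression model wins, as a sanity check of the selection loop.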
[0177] In the prospective pilot study (FLOODLIGHT) the feasibility of conducting remote patient monitoring with the use of digital technology in patients with multiple sclerosis was evaluated. A study population was selected by using the following inclusion and exclusion criteria:
[0178] Key inclusion criteria:
[0179] Signed informed consent form
[0180] Able to comply with the study protocol, in the investigator's judgment
[0181] Age 18-55 years, inclusive
[0182] Have a definite diagnosis of MS, confirmed as per the revised McDonald 2010 criteria
[0183] EDSS score of 0.0 to 5.5, inclusive
[0184] Weight: 45-110 kg
[0185] For women of childbearing potential: Agreement to use an acceptable birth control method during the study period
[0186] Key exclusion criteria:
[0187] Severely ill and unstable patients as per investigator's discretion
[0188] Change in dosing regimen or switch of disease modifying therapy (DMT) in the last 12 weeks prior to enrollment
[0189] Pregnant or lactating, or intending to become pregnant during the study
[0190] It is a primary objective of this study to show adherence to smartphone- and smartwatch-based assessments quantified as compliance level (%) and to obtain feedback from patients and healthy controls on the smartphone and smartwatch schedule of assessments and the impact on their daily activities using a satisfaction questionnaire. Furthermore, additional objectives were addressed. In particular, the association between assessments conducted using the Floodlight Test and conventional MS clinical outcomes was determined; it was established whether Floodlight measures can be used as a marker for disease activity/progression and are associated with changes in MRI and clinical outcomes over time; and it was determined whether the Floodlight Test Battery can differentiate between patients with and without MS, and between phenotypes in patients with MS.
[0191] In addition to the active tests and passive monitoring, the following assessments were performed at each scheduled clinic visit:
[0192] Oral Version of SDMT
[0193] Fatigue Scale for Motor and Cognitive Functions (FSMC)
[0194] Timed 25-Foot Walk Test (T25-FW)
[0195] Berg Balance Scale (BBS)
[0196] 9-Hole Peg Test (9HPT)
[0197] Patient Health Questionnaire (PHQ-9)
[0198] Patients with MS only:
[0199] Brain MRI (MSmetrix)
[0200] Expanded Disability Status Scale (EDSS)
[0201] Patient Determined Disease Steps (PDDS)
[0202] Pen and paper version of MSIS-29
[0203] While performing in-clinic tests, patients and healthy controls were asked to carry/wear a smartphone and a smartwatch to collect sensor data along with in-clinic measures. In summary, the results of the study showed that patients are highly engaged with the smartphone- and smartwatch-based assessments. Moreover, there is a correlation between tests and in-clinic clinical outcome measures recorded at baseline, which suggests that the smartphone-based Floodlight Test Battery may become a powerful tool to continuously monitor MS in a real-world scenario. Further, the smartphone-based measurement of turning speed while walking and performing U-turns appeared to correlate with EDSS.
[0204] TABLE-US-00001. Ranked features (rank; feature; test; variance-stabilizing transform, where applied; description of feature):
1. step_power_mean; Passive Monitoring (40-60 s); logistic; Average per-step power coefficient (integral of variance in accelerometer radius over per-step time span) for gait bouts spanning 40-60 s
2. turns_utt; U-TURN; sigmoid; Number of turns
3. Gc_0_15; SDMT; log10; Mean Timegap between correct responses from time 0 to 15 seconds
4. turn_speed_max_utt; U-TURN; sigmoid; Maximum turn speed
5. step_power_mean; 2MWT; logistic; Average per-step power coefficient (integral of variance in accelerometer radius over per-step time span)
6. turn_speed_min_utt; U-TURN; sigmoid; Minimum turn speed
7. step_power_variance; Passive Monitoring (60-90 s); sigmoid; Variance of per-step power coefficient for gait bouts spanning 60-90 s
8. step_power_variance; Passive Monitoring (40-60 s); logistic; Variance of per-step power coefficient for gait bouts spanning 40-60 s
9. step_power_mean; Passive Monitoring (<20 s); sigmoid; Average per-step power coefficient (integral of variance in accelerometer radius over per-step time span) for gait bouts spanning <20 s
10. span_duration_s_median_utt; U-TURN; (none); Median gait bout length
11. step_power_variance; Passive Monitoring (20-40 s); logistic; Variance of per-step power coefficient for gait bouts spanning 20-40 s
12. step_power_variance; Passive Monitoring (90-120 s); sigmoid; Variance of per-step power coefficient for gait bouts spanning 90-120 s
13. turn_speed_median_utt; U-TURN; sigmoid; Median turn speed
14. step_power_mean; Passive Monitoring (60-90 s); logistic; Average per-step power coefficient (integral of variance in accelerometer radius over per-step time span) for gait bouts spanning 60-90 s
15. GcM_0_15; SDMT; sigmoid; Maximal Timegap between correct responses from time 0 to 15 seconds
16. step_power_mean; Passive Monitoring (20-40 s); logistic; Average per-step power coefficient (integral of variance in accelerometer radius over per-step time span) for gait bouts spanning 20-40 s
17. step_power_mean; Passive Monitoring (90-120 s); logistic; Average per-step power coefficient (integral of variance in accelerometer radius over per-step time span) for gait bouts spanning 90-120 s
18. CCR_0_45; SDMT; (none); From time 0 to 45 seconds: number of correct responses within the longest sequence of overall consecutive correct responses
19. span_duration_s_max_utt; U-TURN; (none); Maximum gait bout length
20. R_Symbol_9; SDMT; log10; Number of total responses for symbol 9: ".—"
21. Gc_0_30; SDMT; (none); Mean Timegap between correct responses from time 0 to 30 seconds
22. CCR_0_15; SDMT; sigmoid; From time 0 to 15 seconds: number of correct responses within the longest sequence of overall consecutive correct responses
23. GM_0_15; SDMT; sigmoid; Maximal Timegap between responses from time 0 to 15 seconds
24. R_0_15; SDMT; sigmoid; Number of total responses from time 0 to 15 seconds
25. CR_Symbol_8; SDMT; log10; Number of correct responses for symbol 8: ")"
26. CCR_0_30; SDMT; log10; From time 0 to 30 seconds: number of correct responses within the longest sequence of overall consecutive correct responses
27. G_0_15; SDMT; log10; Mean Timegap between responses from time 0 to 15 seconds
28. CR_0_15; SDMT; sigmoid; Number of correct responses from time 0 to 15 seconds
29. Gc_0_45; SDMT; log10; Mean Timegap between correct responses from time 0 to 45 seconds
30. R_Symbol_8; SDMT; log10; Number of total responses for symbol 8: ")"
31. R_0_30; SDMT; log10; Number of total responses from time 0 to 30 seconds
32. CR_0_30; SDMT; sigmoid; Number of correct responses from time 0 to 30 seconds
[0205]
[0206] The following gives a more detailed description of the tests. The tests are typically computer-implemented on a data acquisition device such as a mobile device as specified elsewhere herein.
[0207] (1) Tests for Passive Monitoring of Gait and Posture: Passive Monitoring
[0208] The mobile device is, typically, adapted for performing or acquiring data from passive monitoring of all or a subset of activities. In particular, the passive monitoring shall encompass monitoring one or more activities performed during a predefined window, such as one or more days or one or more weeks, selected from the group consisting of: measurements of gait, the amount of movement in daily routines in general, the types of movement in daily routines, general mobility in daily living, and changes in moving behavior.
[0209] Typical passive monitoring performance parameters of interest: [0210] a. frequency and/or velocity of walking; [0211] b. amount, ability and/or velocity to stand up/sit down, stand still and balance; [0212] c. number of visited locations as an indicator of general mobility; [0213] d. types of locations visited as an indicator of moving behavior.
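As an illustration, the gait-related passive-monitoring features appearing in the ranked feature table above (step_power_mean and step_power_variance per gait-bout duration bucket) can be sketched as follows. This is a minimal sketch under assumed data shapes: the bucket boundaries follow the table, but the helper name and the input format (one duration and one list of per-step power coefficients per gait bout) are hypothetical.

```python
import statistics

# Duration buckets (seconds) matching the ranked features: <20 s, 20-40 s,
# 40-60 s, 60-90 s and 90-120 s gait bouts.
BUCKETS = [(0, 20), (20, 40), (40, 60), (60, 90), (90, 120)]

def step_power_features(bouts):
    """bouts: list of (duration_s, [per-step power coefficients]) per gait bout.

    Returns mean and population variance of the per-step power coefficient
    for each duration bucket that contains at least one bout.
    """
    features = {}
    for lo, hi in BUCKETS:
        powers = [p for dur, steps in bouts if lo <= dur < hi for p in steps]
        if powers:
            features[f"step_power_mean_{lo}_{hi}"] = statistics.mean(powers)
            features[f"step_power_variance_{lo}_{hi}"] = statistics.pvariance(powers)
    return features
```

Buckets with no gait bout are simply omitted, mirroring the fact that a subject may not produce bouts of every duration in a monitoring window.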
[0214] (2) Test for Cognitive Capabilities: SDMT (Also Denoted as eSDMT)
[0215] The mobile device is also, typically, adapted for performing or acquiring data from a computer-implemented Symbol Digit Modalities Test (eSDMT). The conventional paper version of the SDMT consists of a sequence of 120 symbols to be displayed in a maximum of 90 seconds and a reference key legend (3 versions are available) with 9 symbols in a given order and their respective matching digits from 1 to 9. The smartphone-based eSDMT is meant to be self-administered by patients and will use a sequence of symbols, typically the same sequence of 110 symbols, and a random alternation (from one test to the next) between reference key legends, typically the 3 reference key legends of the paper/oral version of the SDMT. The eSDMT, similarly to the paper/oral version, measures the speed (number of correct paired responses) to pair abstract symbols with specific digits in a predetermined time window, such as 90 seconds. The test is, typically, performed weekly but could alternatively be performed at higher (e.g. daily) or lower (e.g. bi-weekly) frequency. The test could also alternatively encompass more than 110 symbols and more and/or evolutionary versions of reference key legends. The symbol sequence could also be administered randomly or according to any other modified pre-specified sequence.
[0216] Typical eSDMT performance parameters of interest: [0217] 1. Number of correct responses [0218] a. Total number of overall correct responses (CR) in 90 seconds (similar to oral/paper SDMT) [0219] b. Number of correct responses from time 0 to 30 seconds (CR.sub.0-30) [0220] c. Number of correct responses from time 30 to 60 seconds (CR.sub.30-60) [0221] d. Number of correct responses from time 60 to 90 seconds (CR.sub.60-90) [0222] e. Number of correct responses from time 0 to 45 seconds (CR.sub.0-45) [0223] f. Number of correct responses from time 45 to 90 seconds (CR.sub.45-90) [0224] g. Number of correct responses from time i to j seconds (CR.sub.i-j), where i, j are between 1 and 90 seconds and i<j. [0225] 2. Number of errors [0226] a. Total number of errors (E) in 90 seconds [0227] b. Number of errors from time 0 to 30 seconds (E.sub.0-30) [0228] c. Number of errors from time 30 to 60 seconds (E.sub.30-60) [0229] d. Number of errors from time 60 to 90 seconds (E.sub.60-90) [0230] e. Number of errors from time 0 to 45 seconds (E.sub.0-45) [0231] f. Number of errors from time 45 to 90 seconds (E.sub.45-90) [0232] g. Number of errors from time i to j seconds (E.sub.i-j), where i, j are between 1 and 90 seconds and i<j. [0233] 3. Number of responses [0234] a. Total number of overall responses (R) in 90 seconds [0235] b. Number of responses from time 0 to 30 seconds (R.sub.0-30) [0236] c. Number of responses from time 30 to 60 seconds (R.sub.30-60) [0237] d. Number of responses from time 60 to 90 seconds (R.sub.60-90) [0238] e. Number of responses from time 0 to 45 seconds (R.sub.0-45) [0239] f. Number of responses from time 45 to 90 seconds (R.sub.45-90) [0240] 4. Accuracy rate [0241] a. Mean accuracy rate (AR) over 90 seconds: AR=CR/R [0242] b. Mean accuracy rate (AR) from time 0 to 30 seconds: AR.sub.0-30=CR.sub.0-30/R.sub.0-30 [0243] c. Mean accuracy rate (AR) from time 30 to 60 seconds: AR.sub.30-60=CR.sub.30-60/R.sub.30-60 [0244] d.
Mean accuracy rate (AR) from time 60 to 90 seconds: AR.sub.60-90=CR.sub.60-90/R.sub.60-90 [0245] e. Mean accuracy rate (AR) from time 0 to 45 seconds: AR.sub.0-45=CR.sub.0-45/R.sub.0-45 [0246] f. Mean accuracy rate (AR) from time 45 to 90 seconds: AR.sub.45-90=CR.sub.45-90/R.sub.45-90 [0247] 5. End of task fatigability indices [0248] a. Speed Fatigability Index (SFI) in last 30 seconds: SFI.sub.60-90=CR.sub.60-90/max (CR.sub.0-30, CR.sub.30-60) [0249] b. SFI in last 45 seconds: SFI.sub.45-90=CR.sub.45-90/CR.sub.0-45 [0250] c. Accuracy Fatigability Index (AFI) in last 30 seconds: AFI.sub.60-90=AR.sub.60-90/max (AR.sub.0-30, AR.sub.30-60) [0251] d. AFI in last 45 seconds: AFI.sub.45-90=AR.sub.45-90/AR.sub.0-45 [0252] 6. Longest sequence of consecutive correct responses [0253] a. Number of correct responses within the longest sequence of overall consecutive correct responses (CCR) in 90 seconds [0254] b. Number of correct responses within the longest sequence of consecutive correct responses from time 0 to 30 seconds (CCR.sub.0-30) [0255] c. Number of correct responses within the longest sequence of consecutive correct responses from time 30 to 60 seconds (CCR.sub.30-60) [0256] d. Number of correct responses within the longest sequence of consecutive correct responses from time 60 to 90 seconds (CCR.sub.60-90) [0257] e. Number of correct responses within the longest sequence of consecutive correct responses from time 0 to 45 seconds (CCR.sub.0-45) [0258] f. Number of correct responses within the longest sequence of consecutive correct responses from time 45 to 90 seconds (CCR.sub.45-90) [0259] 7. Time gap between responses [0260] a. Continuous variable analysis of gap (G) time between two successive responses [0261] b. Maximal gap (GM) time elapsed between two successive responses over 90 seconds [0262] c. Maximal gap time elapsed between two successive responses from time 0 to 30 seconds (GM.sub.0-30) [0263] d. 
Maximal gap time elapsed between two successive responses from time 30 to 60 seconds (GM.sub.30-60) [0264] e. Maximal gap time elapsed between two successive responses from time 60 to 90 seconds (GM.sub.60-90) [0265] f. Maximal gap time elapsed between two successive responses from time 0 to 45 seconds (GM.sub.0-45) [0266] g. Maximal gap time elapsed between two successive responses from time 45 to 90 seconds (GM.sub.45-90) [0267] 8. Time Gap between correct responses [0268] a. Continuous variable analysis of gap (Gc) time between two successive correct responses [0269] b. Maximal gap time elapsed between two successive correct responses (GcM) over 90 seconds [0270] c. Maximal gap time elapsed between two successive correct responses from time 0 to 30 seconds (GcM.sub.0-30) [0271] d. Maximal gap time elapsed between two successive correct responses from time 30 to 60 seconds (GcM.sub.30-60) [0272] e. Maximal gap time elapsed between two successive correct responses from time 60 to 90 seconds (GcM.sub.60-90) [0273] f. Maximal gap time elapsed between two successive correct responses from time 0 to 45 seconds (GcM.sub.0-45) [0274] g. Maximal gap time elapsed between two successive correct responses from time 45 to 90 seconds (GcM.sub.45-90) [0275] 9. Fine finger motor skill function parameters captured during eSDMT [0276] a. Continuous variable analysis of duration of touchscreen contacts (Tts), deviation between touchscreen contacts (Dts) and center of closest target digit key, and mistyped touchscreen contacts (Mts) (i.e. contacts not triggering key hit or triggering key hit but associated with secondary sliding on screen), while typing responses over 90 seconds [0277] b. Respective variables by epochs from time 0 to 30 seconds: Tts.sub.0-30, Dts.sub.0-30, Mts.sub.0-30 [0278] c. Respective variables by epochs from time 30 to 60 seconds: Tts.sub.30-60, Dts.sub.30-60, Mts.sub.30-60 [0279] d.
Respective variables by epochs from time 60 to 90 seconds: Tts.sub.60-90, Dts.sub.60-90, Mts.sub.60-90 [0280] e. Respective variables by epochs from time 0 to 45 seconds: Tts.sub.0-45, Dts.sub.0-45, Mts.sub.0-45 [0281] f. Respective variables by epochs from time 45 to 90 seconds: Tts.sub.45-90, Dts.sub.45-90, Mts.sub.45-90 [0282] 10. Symbol-specific analysis of performances by single symbol or cluster of symbols [0283] a. CR for each of the 9 symbols individually and all their possible clustered combinations [0284] b. AR for each of the 9 symbols individually and all their possible clustered combinations [0285] c. Gap time (G) from prior response to recorded responses for each of the 9 symbols individually and all their possible clustered combinations [0286] d. Pattern analysis to recognize preferential incorrect responses by exploring the type of mistaken substitutions for the 9 symbols individually and the 9 digit responses individually. [0287] 11. Learning and cognitive reserve analysis [0288] a. Change from baseline (baseline defined as the mean performance from the first 2 administrations of the test) in CR (overall and symbol-specific as described in #9) between successive administrations of eSDMT [0289] b. Change from baseline (baseline defined as the mean performance from the first 2 administrations of the test) in AR (overall and symbol-specific as described in #9) between successive administrations of eSDMT [0290] c. Change from baseline (baseline defined as the mean performance from the first 2 administrations of the test) in mean G and GM (overall and symbol-specific as described in #9) between successive administrations of eSDMT [0291] d. Change from baseline (baseline defined as the mean performance from the first 2 administrations of the test) in mean Gc and GcM (overall and symbol-specific as described in #9) between successive administrations of eSDMT [0292] e. 
Change from baseline (baseline defined as the mean performance from the first 2 administrations of the test) in SFI.sub.60-90 and SFI.sub.45-90 between successive administrations of eSDMT [0293] f. Change from baseline (baseline defined as the mean performance from the first 2 administrations of the test) in AFI.sub.60-90 and AFI.sub.45-90 between successive administrations of eSDMT [0294] g. Change from baseline (baseline defined as the mean performance from the first 2 administrations of the test) in Tts between successive administrations of eSDMT [0295] h. Change from baseline (baseline defined as the mean performance from the first 2 administrations of the test) in Dts between successive administrations of eSDMT [0296] i. Change from baseline (baseline defined as the mean performance from the first 2 administrations of the test) in Mts between successive administrations of eSDMT.
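A few of the eSDMT parameters above (CR by 30 s epoch, the accuracy rate AR, the Speed Fatigability Index SFI.sub.60-90, and the longest consecutive-correct run CCR) can be sketched from a response log. The function name and the (timestamp, is_correct) input format are assumptions for illustration; the formulas follow parameters 1, 4, 5a and 6a above, with a guard of 1 in the SFI denominator to avoid division by zero on empty epochs.

```python
def sdmt_metrics(responses):
    """responses: list of (timestamp_s, is_correct) pairs over the 90 s test."""
    def correct_in(lo, hi):
        return sum(1 for t, ok in responses if lo <= t < hi and ok)

    # CR per 30 s epoch (parameters 1b-1d)
    cr = {(lo, lo + 30): correct_in(lo, lo + 30) for lo in (0, 30, 60)}
    cr_total = sum(cr.values())
    r_total = len(responses)
    ar = cr_total / r_total if r_total else 0.0      # AR = CR / R (4a)

    # Speed Fatigability Index over the last 30 s (5a); guard against 0
    sfi = cr[(60, 90)] / max(cr[(0, 30)], cr[(30, 60)], 1)

    # Longest run of consecutive correct responses (6a)
    ccr = run = 0
    for _, ok in sorted(responses):
        run = run + 1 if ok else 0
        ccr = max(ccr, run)
    return {"CR": cr_total, "AR": ar, "SFI_60_90": sfi, "CCR": ccr}
```

The same skeleton extends directly to the error counts, epoch-wise accuracy rates and gap-time statistics listed above.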
[0297] (3) Tests for Active Gait and Posture Capabilities: U-Turn Test (Also Denoted as Five U-Turn Test, 5UTT) and 2MWT
[0298] Ambulation performance and gait and stride dynamics are measured with sensor-based (e.g. accelerometer, gyroscope, magnetometer, global positioning system [GPS]) and computer-implemented tests, in particular the 2-Minute Walking Test (2MWT) and the Five U-Turn Test (5UTT).
[0299] In one embodiment, the mobile device is adapted to perform or acquire data from the Two-Minute Walking Test (2MWT). The aim of this test is to assess difficulties, fatigability or unusual patterns in long-distance walking by capturing gait features in a two-minute walk. Data will be captured by the mobile device. A decrease in stride and step length, an increase in stride duration, an increase in step duration and asymmetry, and less periodic strides and steps may be observed in case of disability progression or emerging relapse. Arm swing dynamics while walking will also be assessed via the mobile device. The subject will be instructed to “walk as fast and as long as you can for 2 minutes but walk safely”. The 2MWT is a simple test that is required to be performed indoors or outdoors, on even ground, in a place where patients have identified they could walk straight for ≥200 meters without U-turns. Subjects are allowed to wear regular footwear and an assistive device and/or orthotic as needed. The test is typically performed daily.
[0300] Typical 2MWT performance parameters of particular interest: [0301] 1. Surrogate of walking speed and spasticity: [0302] a. Total number of steps detected in, e.g., 2 minutes (ΣS) [0303] b. Total number of rest stops if any detected in 2 minutes (ΣRs) [0304] c. Continuous variable analysis of walking step time (WsT) duration throughout the 2MWT [0305] d. Continuous variable analysis of walking step velocity (WsV) throughout the 2MWT (step/second) [0306] e. Step asymmetry rate throughout the 2MWT (mean difference of step duration between one step to the next divided by mean step duration): SAR=meanΔ(WsT.sub.x−WsT.sub.x+1)/(120/ΣS) [0307] f. Total number of steps detected for each epoch of 20 seconds (ΣS.sub.t, t+20) [0308] g. Mean walking step time duration in each epoch of 20 seconds: WsTt.sub., t+20=20/ΣS.sub.t, t+20 [0309] h. Mean walking step velocity in each epoch of 20 seconds: WsV.sub.t, t+20=ΣS.sub.t, t+20/20 [0310] i. Step asymmetry rate in each epoch of 20 seconds: SAR.sub.t, t+20=meanΔ.sub.t, t+20(WsT.sub.x−WsT.sub.x+1)/(20/ΣS.sub.t, t+20) [0311] j. Step length and total distance walked through biomechanical modelling [0312] 2. Walking fatigability indices: [0313] a. Deceleration index: DI=WsV.sub.100-120/max (WsV.sub.0-20, WsV.sub.20-40, WsV.sub.40-60) [0314] b. Asymmetry index: AI=SAR.sub.100-120/min (SAR.sub.0-20, SAR.sub.20-40, SAR.sub.40-60)
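The step asymmetry rate (parameter 1e) and the deceleration index (parameter 2a) above can be sketched as follows. This sketch assumes the mean of absolute successive step-time differences for SAR (the sign convention is not stated in the text) and six 20 s epochs over the 120 s walk for DI; function names and input formats are hypothetical.

```python
def step_asymmetry_rate(step_times):
    """SAR = meanΔ(WsT_x − WsT_x+1) / (120 / ΣS), with step_times the
    per-step durations WsT in seconds over the 2-minute walk."""
    n = len(step_times)
    diffs = [abs(a - b) for a, b in zip(step_times, step_times[1:])]
    mean_diff = sum(diffs) / len(diffs)
    return mean_diff / (120 / n)   # normalise by the mean step duration

def deceleration_index(steps_per_epoch):
    """DI = WsV_100-120 / max(WsV_0-20, WsV_20-40, WsV_40-60), given the
    step counts of the six consecutive 20 s epochs of the 2MWT."""
    wsv = [s / 20 for s in steps_per_epoch]   # steps per second per epoch
    return wsv[5] / max(wsv[0], wsv[1], wsv[2])
```

A DI below 1 indicates that the subject walked slower in the final epoch than in the fastest of the first three, i.e. walking fatigability.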
[0315] In another embodiment, the mobile device is adapted to perform or acquire data from the Five U-Turn Test (5UTT). The aim of this test is to assess difficulties or unusual patterns in performing U-turns while walking a short distance at a comfortable pace. The 5UTT is required to be performed indoors or outdoors, on even ground, where patients are instructed to “walk safely and perform five successive U-turns going back and forward between two points a few meters apart”. Gait feature data (change in step counts, step duration and asymmetry during U-turns, U-turn duration, turning speed and change in arm swing during U-turns) during this task will be captured by the mobile device. Subjects are allowed to wear regular footwear and an assistive device and/or orthotic as needed. The test is typically performed daily.
[0316] Typical 5UTT performance parameters of interest: [0317] 1. Mean number of steps needed from start to end of complete U-turn (ΣSu) [0318] 2. Mean time needed from start to end of complete U-turn (Tu) [0319] 3. Mean walking step duration: Tsu=Tu/ΣSu [0320] 4. Turn direction (left/right) [0321] 5. Turning speed (degrees/sec)
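The 5UTT summary parameters above reduce to simple averages. A minimal sketch, assuming each detected U-turn is reported as a (step count, duration) pair; the helper name is hypothetical:

```python
def uturn_metrics(turns):
    """turns: list of (step_count, duration_s) per detected U-turn.

    Returns (ΣSu, Tu, Tsu): mean steps per U-turn, mean U-turn time,
    and mean walking step duration Tsu = Tu / ΣSu.
    """
    mean_steps = sum(s for s, _ in turns) / len(turns)
    mean_time = sum(d for _, d in turns) / len(turns)
    return mean_steps, mean_time, mean_time / mean_steps
```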
[0322]
TABLE-US-00002
Rank | Performance parameter | Test | Description
1 | lmax_pressure_min | Distal Motor Function test (Tap-The-Monster) | The minimum value of each maximum pressure reading per finger tap
2 | log10 DTA_F | Squeeze-A-Shape | The mean lag time between first and second fingers touching the screen, for failed pinches
3 | log10 norm_pct_diff_Mean_MFCCs_9 | Voice test | Mean absolute difference of successive cycles of the 9th Mel Frequency Cepstral Coefficient (MFCC)
4 | log10 std_Mean_MFCCs_8 | Voice test | The standard deviation of the mean value of successive cycles of the 8th MFCC
5 | logistic fatigue_index | Voice test | An estimate of vocal fatigue defined as the ratio of the max duration of the first half to the max duration of the second half
6 | log10 DTA_S | Squeeze-A-Shape | The mean lag time between first and second fingers touching the screen, for successful pinches
7 | sigmoid LINE_TOP_TO_BOTTOM_errSQRT | Draw-A-Shape | Square root of the drawing error for the line top-to-bottom shape
8 | log10 DTA_0_15 | Squeeze-A-Shape | The mean lag time between first and second fingers touching the screen within the time window 0 s-15 s
9 | log10 DTA_15_30 | Squeeze-A-Shape | The mean lag time between first and second fingers touching the screen within the time window 15 s-30 s
10 | log10 DTA | Squeeze-A-Shape | DTA = mean(pinch_start − finger_down): the mean lag time between first and second fingers touching the screen
[0323]
[0324] The following gives a more detailed description of the tests. The tests are typically computer-implemented on a data acquisition device such as a mobile device as specified elsewhere herein.
[0325] (1) Tests for Central Motor Functions: Draw a Shape Test and Squeeze a Shape Test
[0326] The mobile device may be further adapted for performing or acquiring data from a further test for distal motor function (so-called “draw a shape test”) configured to measure dexterity and distal weakness of the fingers. The dataset acquired from such a test allows identifying the precision of finger movements, the pressure profile and the speed profile.
[0327] The aim of the “Draw a Shape” test is to assess fine finger control and stroke sequencing. The test is considered to cover the following aspects of impaired hand motor function: tremor, spasticity and impaired hand-eye coordination. The patients are instructed to hold the mobile device in the untested hand and to draw, on a touchscreen of the mobile device, 6 prewritten alternating shapes of increasing complexity (linear, rectangular, circular, sinusoidal, and spiral; vide infra) with the second finger of the tested hand “as fast and as accurately as possible” within a maximum time of, for instance, 30 seconds. To draw a shape successfully, the patient's finger has to slide continuously on the touchscreen and connect indicated start and end points, passing through all indicated check points and keeping within the boundaries of the writing path as much as possible. The patient has a maximum of two attempts to successfully complete each of the 6 shapes. The test will be performed alternately with the right and left hand, and the user will be instructed on daily alternation. The two linear shapes each have a specific number “a” of checkpoints to connect, i.e. “a-1” segments. The square shape has a specific number “b” of checkpoints to connect, i.e. “b-1” segments. The circular shape has a specific number “c” of checkpoints to connect, i.e. “c-1” segments. The eight-shape has a specific number “d” of checkpoints to connect, i.e. “d-1” segments. The spiral shape has a specific number “e” of checkpoints to connect, i.e. “e-1” segments. Completing the 6 shapes then implies successfully drawing a total of “(2a+b+c+d+e−6)” segments.
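The checkpoint bookkeeping above can be made concrete with a small helper (the name is hypothetical): each shape contributes one segment less than its checkpoint count, and with two linear shapes the six shapes together yield 2a+b+c+d+e−6 segments.

```python
def total_segments(a, b, c, d, e):
    """Total drawable segments across the 6 shapes, given checkpoint
    counts: a per linear shape (there are two), b square, c circular,
    d eight-shape, e spiral. Equals 2*a + b + c + d + e - 6."""
    checkpoints = [a, a, b, c, d, e]           # 6 shapes, two linear
    return sum(n - 1 for n in checkpoints)
```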
[0328] Typical Draw a Shape test performance parameters of interest:
[0329] Based on shape complexity, the linear and square shapes can be associated with a weighting factor (Wf) of 1, circular and sinusoidal shapes a weighting factor of 2, and the spiral shape a weighting factor of 3. A shape which is successfully completed on the second attempt can be associated with a weighting factor of 0.5. These weighting factors are numerical examples which can be changed in the context of the present invention. [0330] 1. Shape completion performance scores: [0331] a. Number of successfully completed shapes (0 to 6) (ΣSh) per test [0332] b. Number of shapes successfully completed at first attempt (0 to 6) (ΣSh.sub.1) [0333] c. Number of shapes successfully completed at second attempt (0 to 6) (ΣSh.sub.2) [0334] d. Number of failed/uncompleted shapes on all attempts (0 to 12) (ΣF) [0335] e. Shape completion score reflecting the number of successfully completed shapes adjusted with weighting factors for different complexity levels for respective shapes (0 to 10) (Σ[Sh*Wf]) [0336] f. Shape completion score reflecting the number of successfully completed shapes adjusted with weighting factors for different complexity levels for respective shapes and accounting for success at first vs second attempts (0 to 10) (Σ[Sh.sub.1*Wf]+Σ[Sh.sub.2*Wf*0.5]) [0337] g. Shape completion scores as defined in #1e, and #1f may account for speed at test completion if being multiplied by 30/t, where t would represent the time in seconds to complete the test. [0338] h. Overall and first attempt completion rate for each 6 individual shapes based on multiple testing within a certain period of time: (ΣSh.sub.1)/(ΣSh.sub.1+ΣSh.sub.2+ΣF) and (ΣSh.sub.1+ΣSh.sub.2)/(ΣSh.sub.1+ΣSh.sub.2+ΣF). [0339] 2. Segment completion and celerity performance scores/measures: [0340] (analysis based on best of two attempts [highest number of completed segments] for each shape, if applicable) [0341] a. Number of successfully completed segments (0 to [2a+b+c+d+e−6]) (ΣSe) per test [0342] b. 
Mean celerity ([C], segments/second) of successfully completed segments: C=ΣSe/t, where t would represent the time in seconds to complete the test (max 30 seconds) [0343] c. Segment completion score reflecting the number of successfully completed segments adjusted with weighting factors for different complexity levels for respective shapes (Σ[Se*Wf]) [0344] d. Speed-adjusted and weighted segment completion score (Σ[Se*Wf]*30/t), where t would represent the time in seconds to complete the test. [0345] e. Shape-specific number of successfully completed segments for linear and square shapes (ΣSe.sub.LS) [0346] f. Shape-specific number of successfully completed segments for circular and sinusoidal shapes (ΣSe.sub.CS) [0347] g. Shape-specific number of successfully completed segments for spiral shape (ΣSe.sub.S) [0348] h. Shape-specific mean linear celerity for successfully completed segments performed in linear and square shape testing: C.sub.L=ΣSe.sub.LS/t, where t would represent the cumulative epoch time in seconds elapsed from starting to finishing points of the corresponding successfully completed segments within these specific shapes. [0349] i. Shape-specific mean circular celerity for successfully completed segments performed in circular and sinusoidal shape testing: C.sub.C=ΣSe.sub.CS/t, where t would represent the cumulative epoch time in seconds elapsed from starting to finishing points of the corresponding successfully completed segments within these specific shapes. [0350] j. Shape-specific mean spiral celerity for successfully completed segments performed in the spiral shape testing: C.sub.S=ΣSe.sub.S/t, where t would represent the cumulative epoch time in seconds elapsed from starting to finishing points of the corresponding successfully completed segments within this specific shape. [0351] 3. 
Drawing precision performance scores/measures: [0352] (analysis based on best of two attempts [highest number of completed segments] for each shape, if applicable) [0353] a. Deviation (Dev) calculated as the sum of overall area under the curve (AUC) measures of integrated surface deviations between the drawn trajectory and the target drawing path from starting to ending checkpoints that were reached for each specific shape divided by the total cumulative length of the corresponding target path within these shapes (from starting to ending checkpoints that were reached). [0354] b. Linear deviation (Dev.sub.L) calculated as Dev in #3a but specifically from the linear and square shape testing results. [0355] c. Circular deviation (Dev.sub.C) calculated as Dev in #3a but specifically from the circular and sinusoidal shape testing results. [0356] d. Spiral deviation (Dev.sub.S) calculated as Dev in #3a but specifically from the spiral shape testing results. [0357] e. Shape-specific deviation (Dev.sub.1-6) calculated as Dev in #3a but from each of the 6 distinct shape testing results separately, only applicable for those shapes where at least 3 segments were successfully completed within the best attempt. [0358] f. Continuous variable analysis of any other methods of calculating shape-specific or shape-agnostic overall deviation from the target trajectory. [0359] 4.) Pressure profile measurement [0360] i) Exerted average pressure [0361] ii) Deviation (Dev) calculated as the standard deviation of pressure
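The weighted shape completion score of parameter 1f can be sketched with the example weighting factors given above (1 for linear and square shapes, 2 for circular and sinusoidal, 3 for spiral, with second-attempt successes counting half). The shape names and the input format are hypothetical choices for illustration.

```python
# Example weighting factors from the text; these are adjustable.
WEIGHTS = {"line1": 1, "line2": 1, "square": 1,
           "circle": 2, "sinusoid": 2, "spiral": 3}

def completion_score(results):
    """results: dict mapping shape name -> attempt that succeeded (1 or 2),
    or None if both attempts failed.

    Implements Σ[Sh1*Wf] + Σ[Sh2*Wf*0.5]; the maximum is 10 when all six
    shapes succeed on the first attempt.
    """
    score = 0.0
    for shape, attempt in results.items():
        wf = WEIGHTS[shape]
        if attempt == 1:
            score += wf
        elif attempt == 2:
            score += wf * 0.5
    return score
```

Multiplying the result by 30/t, with t the completion time in seconds, gives the speed-adjusted variant of parameter 1g.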
[0362] The distal motor function test (so-called “squeeze a shape test”) may measure dexterity and distal weakness of the fingers. The dataset acquired from such a test allows identifying the precision and speed of finger movements and related pressure profiles. The test may first require calibration with respect to the movement precision ability of the subject.
[0363] The aim of the Squeeze a Shape test is to assess fine distal motor manipulation (gripping and grasping) and control by evaluating the accuracy of pinch-closing finger movements. The test is considered to cover the following aspects of impaired hand motor function: impaired gripping/grasping function, muscle weakness, and impaired hand-eye coordination. The patients are instructed to hold the mobile device in the untested hand and, by touching the screen with two fingers from the same hand (thumb+second or thumb+third finger preferred), to squeeze/pinch as many round shapes (i.e. tomatoes) as they can during 30 seconds. Impaired fine motor manipulation will affect the performance. The test will be performed alternately with the right and left hand, and the user will be instructed on daily alternation.
[0364] Typical Squeeze a Shape test performance parameters of interest: [0365] 1. Number of squeezed shapes [0366] a. Total number of tomato shapes squeezed in 30 seconds (ΣSh) [0367] b. Total number of tomatoes squeezed at first attempt (ΣSh.sub.1) in 30 seconds (a first attempt is detected as the first double contact on screen following a successful squeezing if not the very first attempt of the test) [0368] 2. Pinching precision measures: [0369] a. Pinching success rate (PsR) defined as ΣSh divided by the total number of pinching (ΣP) attempts (measured as the total number of separately detected double finger contacts on screen) within the total duration of the test. [0370] b. Double touching asynchrony (DTA) measured as the lag time between first and second fingers touch the screen for all double contacts detected. [0371] c. Pinching target precision (P.sub.TP) measured as the distance from equidistant point between the starting touch points of the two fingers at double contact to the centre of the tomato shape, for all double contacts detected. [0372] d. Pinching finger movement asymmetry (P.sub.FMA) measured as the ratio between respective distances slid by the two fingers (shortest/longest) from the double contact starting points until reaching pinch gap, for all double contacts successfully pinching. [0373] e. Pinching finger velocity (P.sub.FS) measured as the speed (mm/sec) of each one and/or both fingers sliding on the screen from time of double contact until reaching pinch gap, for all double contacts successfully pinching. [0374] f. Pinching finger asynchrony (PFA) measured as the ratio between velocities of respective individual fingers sliding on the screen (slowest/fastest) from the time of double contact until reaching pinch gap, for all double contacts successfully pinching. [0375] g. Continuous variable analysis of 2a to 2f over time as well as their analysis by epochs of variable duration (5-15 seconds) [0376] h. 
Continuous variable analysis of integrated measures of deviation from target drawn trajectory for all tested shapes (in particular the spiral and square) [0377] 3.) Pressure profile measurement [0378] i) Exerted average pressure [0379] ii) Deviation (Dev) calculated as the standard deviation of pressure
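Two of the pinching precision measures above, the pinching success rate (PsR, parameter 2a) and the double touching asynchrony (DTA, parameter 2b), can be sketched as follows. The per-attempt record format (first-finger touch time, second-finger touch time, success flag) is an assumption for illustration.

```python
def pinch_metrics(attempts):
    """attempts: list of (t_first_s, t_second_s, success) per detected
    double finger contact.

    Returns (PsR, DTA): PsR = ΣSh / ΣP (successful squeezes over all
    pinching attempts), and DTA as the mean lag time between the first
    and second finger touching the screen.
    """
    squeezed = sum(1 for _, _, ok in attempts if ok)
    psr = squeezed / len(attempts)
    lags = [abs(t2 - t1) for t1, t2, _ in attempts]
    dta = sum(lags) / len(lags)
    return psr, dta
```

Restricting the lag list to failed or successful attempts only yields the DTA_F and DTA_S variants from the feature table above.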
[0380] More typically, the Squeeze a Shape test and the Draw a Shape test are performed in accordance with the method of the present invention. Even more specifically, the performance parameters listed in Table 1 below are determined.
[0381] The data acquisition device may be further adapted for performing or acquiring data from a further test for central motor function (so-called “voice test”) configured to measure proximal central motor functions by measuring voicing capabilities.
[0382] (2) Cheer-the-Monster Test, Voice Test:
[0383] The term “Cheer-the-Monster test”, as used herein, relates to a test for sustained phonation, which is, in an embodiment, a surrogate test for respiratory function assessments to address abdominal and thoracic impairments, in an embodiment including voice pitch variation as an indicator of muscular fatigue, central hypotonia and/or ventilation problems. In an embodiment, Cheer-the-Monster measures the participant's ability to sustain a controlled vocalization of an “aaah” sound. The test uses an appropriate sensor to capture the participant's phonation, in an embodiment a voice recorder, such as a microphone.
[0384] In an embodiment, the task to be performed by the subject is as follows: Cheer the Monster requires the participant to control the speed at which the monster runs towards his goal. The monster is trying to run as far as possible in 30 seconds. Subjects are asked to make as loud an “aaah” sound as they can, for as long as possible. The volume of the sound is determined and used to modulate the character's running speed. The game duration is 30 seconds so multiple “aaah” sounds may be used to complete the game if necessary.
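The text states only that the sound volume modulates the character's running speed; a minimal sketch under the assumption of a linear mapping from the RMS loudness of an audio frame to a capped speed (all names and constants are hypothetical):

```python
import math

def rms(frame):
    """Root-mean-square amplitude of one audio frame (list of samples)."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def running_speed(frame, max_speed=10.0, full_scale=1.0):
    """Map frame loudness linearly to a speed in [0, max_speed],
    saturating at full_scale amplitude."""
    return max_speed * min(rms(frame) / full_scale, 1.0)
```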
[0385] (3) Tap-the-Monster Test:
[0386] The term “Tap the Monster test”, as used herein, relates to a test designed for the assessment of distal motor function in accordance with MFM D3 (Bérard C et al. (2005), Neuromuscular Disorders 15:463). In an embodiment, the tests are specifically anchored to MFM tests 17 (pick up ten coins), 18 (go around the edge of a CD with a finger), 19 (pick up a pencil and draw loops) and 22 (place finger on the drawings), which evaluate dexterity, distal weakness/strength, and power. The game measures the participant's dexterity and movement speed. In an embodiment, the task to be performed by the subject is as follows: the subject taps on monsters appearing randomly at 7 different screen positions.
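The top-ranked Tap-the-Monster feature in the second feature table above, lmax_pressure_min, is described as the minimum over taps of each tap's maximum pressure reading. Assuming one pressure trace (a list of readings) per tap, a sketch:

```python
def lmax_pressure_min(taps):
    """taps: list of per-tap pressure traces (lists of readings).

    For each tap take the maximum pressure reading, then return the
    minimum of these per-tap maxima over the whole test.
    """
    return min(max(trace) for trace in taps)
```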
[0387]
TABLE-US-00003
Rank | Performance parameter | Test | Description
1 | log10 SPIRAL_sp_cov | Draw-A-Shape | The coefficient of variation in the drawing velocity of the Spiral shape
2 | SPIRAL_hausD | Draw-A-Shape | The maximum Hausdorff distance between drawn and reference shape, as a proxy for maximum drawing error for the Spiral shape
3 | log10 SQUARE_acc_celerity | Draw-A-Shape | The number of way-points hit (accuracy) divided by the time taken to complete the Square shape
4 | sigmoid SQUARE_Mag_areaError | Draw-A-Shape |
[0388]
LIST OF REFERENCE NUMBERS
[0389] 110 machine learning system [0390] 112 processing unit [0391] 114 communication interface [0392] 116 model unit [0393] 118 output interface [0394] 120 step a) [0395] 122 pre-processing [0396] 124 step b) [0397] 126 transformation and feature extraction [0398] 128 step c) [0399] 130 step d) [0400] 132 step e)