ADAPTIVE NEUROLOGICAL TESTING METHOD
20220172821 · 2022-06-02
Inventors
- Helen Christine Shiells (Aberdeen, GB)
- Suzannah Marie Morson (Aberdeen, GB)
- Björn Olaf Schelter (Aberdeen, GB)
- Damon Jude Wischik (Cambridge, GB)
- Claude Michel Wischik (Aberdeen, GB)
CPC classification
A61B5/4088
HUMAN NECESSITIES
G16H20/70
PHYSICS
A61B5/165
HUMAN NECESSITIES
A61B5/7475
HUMAN NECESSITIES
International classification
G16H20/70
PHYSICS
A61B5/00
HUMAN NECESSITIES
A61B5/16
HUMAN NECESSITIES
Abstract
A method, a computer implemented method, and a device, all for adaptively testing a subject's neurological state. The method comprising the steps of: administering one or more seed questions; obtaining one or more answer(s) to the one or more seed questions; calculating a score value of a latent subject trait from the answers to the one or more seed questions; the method comprising an adaptive test loop, comprising: (a) selecting, based on the score value, one or more further questions from a bank of questions; (b) administering the one or more further questions to the subject; (c) updating the score value based on the answers to the one or more further questions; (d) determining whether a test completion criteria has been met; wherein the method repeats steps (a)-(d) in sequence until the test completion criteria has been met, and provides an output of the test based on the score value which is indicative of the subject's neurological state.
Claims
1. A method of adaptively testing a subject's neurological state, comprising the steps of: administering one or more seed questions; obtaining one or more answer(s) to the one or more seed questions; calculating a score value of a latent subject trait from the answers to the one or more seed questions; the method comprising an adaptive test loop, comprising: (a) selecting, based on the score value, one or more further questions from a bank of questions; (b) administering the one or more further questions to the subject; (c) updating the score value based on the answers to the one or more further questions; and (d) determining whether a test completion criteria has been met; wherein the method repeats steps (a)-(d) in sequence until the test completion criteria has been met, and provides an output of the test based on the score value which is indicative of the subject's neurological state.
2. The method of claim 1, wherein the score value is a latent trait ability score.
3. The method of either claim 1 or claim 2, wherein each of the further questions has an associated difficulty rating and an indication of relevancy to at least one of the one or more latent subject traits.
4. The method of claim 3, wherein the selection of the one or more further questions is further based on one or both of the associated difficulty rating and the indication of relevancy to one or more latent subject traits.
5. The method of any preceding claim, wherein the method calculates a score value for each of a plurality of latent subject traits from the answer(s) to the one or more seed questions, and selecting the further questions based on at least one of these score values.
6. The method of claim 5, including an initial step of determining a weighting value for each of the plurality of latent subject traits.
7. The method of claim 6, wherein selecting one or more further questions is further based on the weighting value for at least one of the plurality of latent subject traits.
8. The method of any preceding claim, wherein the one or more latent subject traits include one or more of: cognitive trait, dementia trait, and depression trait.
9. The method of any preceding claim, wherein the score for the or each latent subject trait is calculated using an expectation maximisation algorithm and/or a generalised linear model.
10. The method of claim 9, wherein the generalised linear model has the form:
Y_{i,q}~Bernoulli(α+β_q cog_i+γ_q dem_i+δ_q dep_i) where, Y_{i,q} is the response given by the subject at a particular visit i, q is the question, α is the intercept, β_q, γ_q, and δ_q are difficulty scores for the questions associated with the cognitive trait, dementia trait, and depression trait respectively, and cog_i, dem_i, and dep_i are the subject ability scores associated with the cognitive trait, dementia trait, and depression trait respectively.
11. The method of any preceding claim, wherein the seed questions and/or the further questions are selected from any one or more of the Alzheimer's Disease Assessment Scale-cognitive test, Alzheimer's Disease Cooperative Study—Activities of Daily Living test, Neuropsychiatric Inventory test, Montgomery-Asberg Depression Rating Scale test, and the Mini-Mental State Examination test.
12. The method of any preceding claim, wherein the seed questions and/or the further questions are used to calculate and update a score value for each of a plurality of latent subject traits.
13. The method of any preceding claim, wherein selecting one or more further questions is further based on an information content for each question in the bank of questions.
14. The method of claim 12, wherein the information content for each question in the bank of questions is calculated for the subject based on the score value of the latent subject trait, and one or more questions from the bank of questions with the highest information content is selected.
15. A computer implemented method of adaptively testing a subject's neurological state, comprising the steps of: presenting one or more seed questions on a display; receiving answers to the one or more seed questions via an input device; calculating, by a processor, a score value of a latent subject trait from the received answers to the one or more seed questions; entering an adaptive test loop, comprising the steps of: (a) selecting, based on the score value, one or more further questions from a database of questions; (b) presenting the one or more further questions on the display; (c) updating the score value based on received answers to the one or more further questions; and (d) determining whether a test completion criteria has been met; wherein the computer implemented method repeats steps (a)-(d) in sequence until the test completion criteria has been met, and provides an output of the test based on the score value which is indicative of the subject's neurological state.
16. The computer implemented method of claim 15, wherein the score value is a latent trait ability score.
17. The computer implemented method of either claim 15 or claim 16, wherein each of the further questions has an associated difficulty rating and an indication of relevancy to at least one of the one or more latent subject traits.
18. The computer implemented method of claim 17, wherein selecting the one or more further questions is further based on one or both of the associated difficulty rating and the indication of relevancy to one or more latent subject traits.
19. The computer implemented method of any of claims 15-18, including calculating, by the processor, a score value for each of a plurality of latent subject traits from the answer(s) to the one or more seed questions, and selecting the further questions based on at least one of these score values.
20. The computer implemented method of claim 19, including an initial step of determining a weighting value for each of the plurality of latent subject traits.
21. The computer implemented method of claim 20, wherein selecting one or more further questions is further based on the weighting value for at least one of the plurality of latent subject traits.
22. The computer implemented method of any of claims 15-21, wherein the one or more latent subject traits include one or more of: cognitive trait, dementia trait, and depression trait.
23. The computer implemented method of any of claims 15-22, wherein the score for the or each latent subject trait is calculated using an expectation maximisation algorithm and/or a generalised linear model.
24. The computer implemented method of claim 23, wherein the generalised linear model has the form:
Y_{i,q}~Bernoulli(α+β_q cog_i+γ_q dem_i+δ_q dep_i) where, Y_{i,q} is the response given by the subject at a particular visit i, q is the question, α is the intercept, β_q, γ_q, and δ_q are difficulty scores for the questions associated with the cognitive trait, dementia trait, and depression trait respectively, and cog_i, dem_i, and dep_i are the subject ability scores associated with the cognitive trait, dementia trait, and depression trait respectively.
25. The computer implemented method of any of claims 15-24, wherein the seed questions and/or the further questions are selected from any one or more of the Alzheimer's Disease Assessment Scale-cognitive test, Alzheimer's Disease Cooperative Study—Activities of Daily Living test, Neuropsychiatric Inventory test, Montgomery-Asberg Depression Rating Scale test, and the Mini-Mental State Examination test.
26. The computer implemented method of any of claims 15-25, wherein the seed questions and/or the further questions are used to calculate and update a score value for each of a plurality of latent subject traits.
27. The computer implemented method of any of claims 15-26, wherein selecting one or more further questions is further based on an information content for each question in the database of questions.
28. The computer implemented method of claim 27, wherein the information content for each question in the database of questions is calculated for the subject based on the score value of the latent subject trait, and one or more questions from the database of questions with the highest information content is selected.
29. A device for implementing an adaptive test of a subject's neurological state, comprising: a processor, a memory, a display, and an input device; wherein the memory contains processor executable instructions to perform the method of any of claims 15-28.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0046] Embodiments of the invention will now be described by way of example with reference to the accompanying drawings in which:
DETAILED DESCRIPTION AND FURTHER OPTIONAL FEATURES
[0060] Aspects and embodiments of the present invention will now be discussed with reference to the accompanying figures. Further aspects and embodiments will be apparent to those skilled in the art. All documents mentioned in this text are incorporated herein by reference.
[0061] Item Response Theory
[0062] A battery of separate neuropsychological tests exists to quantify the severity of a patient's Dementia, whether for the purpose of diagnosis or for monitoring of the disease. The tests are delivered in the form of individual items of varying difficulty which can be answered by the patient or carer to give an understanding of the severity of the patient's condition. This is as illustrated in
[0063] An improvement to current scoring methods provided by the present invention is item response theory (IRT). IRT enables the scoring of individual items from tests according to their difficulty, and of patients according to their ability, both with respect to some underlying latent trait, here a specific clinical feature of Dementia-related decline. The central concept of IRT is therefore the relationship between the way a participant responds to a question and the latent trait of which that question is indicative.
[0064] Item response theory is predicated on three key assumptions. The first is that all items in a given test are a measure of strictly one latent trait, known as unidimensionality. Multidimensional IRT is possible where there is assumed to be more than one latent trait, but is computationally complex owing to the increased number of parameters to fit from the data. The dimensionality of the data is, in this case, analogous to the number of latent traits measured by the given items. The number of latent traits is typically assessed using some form of factor analysis, such as principal components analysis. Factor analysis generally assumes a normal distribution, which is rarely obtained when the data collected is binary in nature; therefore, factor analysis would typically fail in this special case. The second assumption is that each of the items in a test is locally independent; that is, a right or wrong answer to one item should not automatically lead to an identical answer to another question. Items in a test should be correlated only through the latent trait of which they are a measure; if an item is a measure of more than one latent trait (i.e., multidimensional), correlation can legitimately be through multiple latent traits. The third assumption is that a test taker, or participant, answers a question in a way that is indicative of the latent trait to be measured. More specifically, the probability that a participant will answer a question correctly is a function of their latent ability on a given trait: the higher a participant's latent ability on a specific trait, the higher the probability that they will answer a question correctly. The latent trait cannot typically be directly measured, so, in order to indirectly assess the capability of a participant with respect to this latent trait, items must be used which are an indirect measure.
This theory can be applied more generally to measure many different kinds of latent traits, such as cognition and depression. The relationship between the participant's latent trait ability and their probability of answering an item correctly is typically modelled using the standard logistic function.
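The logistic mapping from latent ability to response probability described above can be sketched as follows. The single-trait, Rasch-style parameterisation and the function name are illustrative assumptions for this sketch, not the exact multidimensional model set out later in this document:

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Probability of a correct response under a simple one-parameter
    logistic (Rasch-style) item response model: ability and difficulty
    lie on the same latent-trait scale, and the standard logistic
    function restricts the outcome to between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Higher latent ability implies a higher probability of a correct answer;
# when ability equals item difficulty the probability is exactly 0.5.
```

For example, `p_correct(0.0, 0.0)` returns 0.5, and increasing ability while holding difficulty fixed monotonically increases the probability.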
[0065] In prior applications of item response theory to neuropsychological test data collected from subjects with Dementia, the test data is assumed to be one-dimensional, meaning that the items included in the analysis are assumed to measure a single latent trait. This latent trait is then assumed to be a proxy for Dementia, or a single aspect of Dementia.
[0066] However, clinical observation has demonstrated that individuals show different impairments at different stages; disease presentation and progression is not uniform, as demonstrated in
[0067] At a basic level, the multidimensional aspect of Alzheimer's disease and Dementia is already acknowledged within the clinical space. This is most clearly demonstrated by the requirement from the US Food and Drug Administration and the European Medicines Agency that any proposed treatment of the disease must show efficacy in cognition and also exhibit clinical meaningfulness, generally interpreted as improving functional ability, as co-primary endpoints. Traditionally, separate tests have been used to measure each feature of interest. A measure such as the ADAS-Cog was designed to measure cognition, while a measure such as the ADL was designed to measure function. It is implied that scores on one test (and therefore also one feature) cannot provide any information about the subject's ability on the second test/feature: the interpretation of each test score is restricted to only the single feature it was designed to measure. This also causes the definition of the feature to be restricted by the test used to measure the feature. For example, the MMSE and ADAS-Cog are both considered tests of cognition. However, due to its brief nature, the MMSE provides only a measure of general cognition, while the more specialised ADAS-Cog provides a measure of AD-related cognition.
[0068] The use of IRT allows a subtle but significant shift to take place, removing the focus from the test used to measure a feature to the feature itself. This not only allows the definition of a feature to maintain independence from the test used to measure the feature, but also for the tests themselves to inform multiple different features. The question then arises as to which features or traits are of interest. The traditional dichotomy of cognition and function remains informative, with a slight broadening of the definition of function. The cognitive trait reflects primarily specific aspects of behaviour: the ability to remember a particular word or to copy a particular diagram. The function trait has traditionally been interpreted as activities of daily living, leading to tests designed to measure abilities such as making a meal or travelling independently. However, the presence of neuropsychiatric symptoms is also of diagnostic and clinical importance. Thus, the function trait, hereafter referred to as the dementia trait, covers not only tasks of daily living but neuropsychiatric symptoms, bringing it more in line with guidance from the FDA and EMA regarding clinical relevance. The focus on traits also allows a previously undiscussed confound to be considered: depression. In classical test methods, the presence of depression can cause a subject's performance on tests of cognition or function to be lower than their true score. The subject is then perceived to be impaired when in reality the low score could be due to, for example, apathy. As previously stated, detecting the presence of depression in the classical test methodology would require a further test to be administered and, even then, the level of impact of the depression on cognitive ability could not be quantified and so corrected for.
IRT allows depression to be identified as a separate latent trait without the need for a full test to be administered, and for the confounding effects of depression to be accounted for.
[0069] In applying multidimensional IRT to data from neuropsychological tests which measure many different aspects of Dementia, disparate datasets can be unified into a single analysis which captures disease progression across multiple areas.
[0070] General Mathematical Methods
[0071] The multidimensional IRT algorithm used in aspects of the present invention is realised in the form of an expectation maximisation (EM) algorithm, iteratively applying generalised linear models (GLM) to score questions and subjects. Initial conditions are estimated using summed subject scores on seed questions assessed by a psychologist to be a good measure of the latent traits underlying subject decline in Dementia, namely cognition, functional impairment (dementia) and depression. The GLM is used to analyse binary subject responses to individual test items, firstly using subject score with respect to each postulated latent trait as covariates, in order to score each of the individual items. Then individual item scores with respect to each postulated latent trait are used to re-score the subjects. The EM algorithm is used to apply these GLMs in an iterative regime in order to locally maximise the likelihood of final test and subject scores.
[0072] As the data to be analysed herein is binary, or binarized, a special case of the binomial distribution—the Bernoulli distribution—can be utilised, and therefore a GLM with a logistic regression link function is used (P. McCullagh and J. A. Nelder. Generalized Linear Models. Chapman and Hall, 1989. Ch. 4.3, pp 108). As discussed previously, the logistic regression model forms the basis of other IRT models (e.g., the Rasch Model, the 2 and 3 parameter logistic models, as illustrated in B. Wright. A History of Social Science Measurement. Educational Measurement: Issues and Practice, Vol 16(4), 1997), as it restricts probability outcomes to between 0 and 1.
[0073] Given a set of parameters θ, an expectation maximisation (EM) algorithm (for example, as disclosed in A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39, 1977) provides an iterative procedure for arriving at a locally maximised likelihood L(θ), a quantity for which it is not always possible to directly calculate the maximum. In each iteration of the EM algorithm, an update for the unknown parameter θ is calculated, in which L(θ.sub.t+1) is strictly larger than L(θ.sub.t). The expectation (E) step of the algorithm calculates the expected value of the likelihood given the current, conditional parameter estimates of the investigated system θ.sub.t and based on observed aspects of the statistical model, whether this be known parameters or observed data. The maximisation (M) step of the algorithm then maximises the likelihood with respect to the parameters estimated in the E step in order to calculate updated parameters θ.sub.t+1. The EM algorithm then continues to iterate between these two steps until the parameters being estimated converge to some tolerance.
[0074] The EM algorithm can take the form of iterative applications of GLMs to data in order to find the maximum likelihood estimates of hidden parameters. This makes it possible to concurrently estimate two dependent hidden parameters. Initial estimates are made of the first set of hidden parameters, and these initial estimates are used within the explanatory variable of the GLM in order to estimate a second set of hidden parameters. Using this second, estimated, set of hidden parameters as an explanatory variable, it is then possible to estimate the first set of hidden parameters, reducing reliance upon the initial estimate. This process is then iterated until the estimates of the first and second sets of hidden parameters both converge independently to some tolerance. Where A and B are the first and second hidden sets of parameters respectively, this follows the algorithm:
[0075] 1. A_(0) is initialised with some sensible estimate.
[0076] 2. B_(t) is then estimated using a GLM with estimates A_(t) included in the explanatory variable.
[0077] 3. A_(t+1) is then estimated using a GLM with the previous estimates B_(t) included in the explanatory variable.
[0078] 4. Steps 2 and 3 are then iterated until the estimates for A_(t) and B_(t) have each converged.
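The alternating-GLM scheme above can be sketched as follows. For brevity this sketch uses a single latent trait and a minimal gradient-ascent logistic regression in place of a full GLM package; the function names, the simulated data in the usage note, and the use of summed scores as the initial estimate are illustrative assumptions rather than the exact implementation:

```python
import numpy as np

def fit_logistic(X, y, iters=200, lr=0.5):
    """Minimal no-intercept logistic regression fitted by gradient
    ascent, standing in for the GLM fits; a real implementation would
    use a statistics package."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        z = np.clip(X @ w, -30.0, 30.0)   # guard against overflow
        p = 1.0 / (1.0 + np.exp(-z))      # logistic link
        w += lr * X.T @ (y - p) / len(y)  # ascend the log-likelihood
    return w

def alternating_em(Y, subj_init, iters=10):
    """Sketch of the iterated-GLM scheme: Y[i, q] is subject i's binary
    response to item q. Step A scores items given subject scores; step B
    re-scores subjects given item scores; the two steps are iterated.
    One latent trait only; the document alternates over three traits."""
    subj = np.asarray(subj_init, float).copy()
    for _ in range(iters):
        # Step A: one logistic fit per item, subject scores as covariate.
        items = np.array([fit_logistic(subj[:, None], Y[:, q])[0]
                          for q in range(Y.shape[1])])
        # Step B: one logistic fit per subject, item scores as covariate.
        subj = np.array([fit_logistic(items[:, None], Y[i, :])[0]
                         for i in range(Y.shape[0])])
    return items, subj
```

Seeding `subj_init` with each subject's summed (mean) score mirrors the initial-conditions step described above; on simulated data with two ability groups the recovered subject scores preserve the group ordering.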
[0079] Multidimensional IRT methods are known per se, which estimate scores for individual items in multiple domains. However, the nature of neuropsychological data collected from patients is such that it is subject to large fluctuations dependent upon the “on-the-day” condition of the patient. These natural fluctuations increase the complexity of the data analysis. A primary concern in analysing the data is to ensure that any algorithm implemented arrives at a result which is mathematically meaningful. That is to say, a global optimum in the likelihood landscape of the multidimensional space is found, and not simply a local optimum. This means that traditional methods of multidimensional item response theory analysis can fail to converge on a global optimum, or result in a complete failure to converge, as is the case with the particular dataset exemplified herein.
[0080] Therefore a modified form of item response theory is provided which enables the pre-specification of initial conditions to aid in guiding the algorithm towards a global optimum solution. This is achieved by use of generalised linear models, which are seeded with pre-specified initial conditions and contained in an expectation maximisation algorithm, scoring items and subjects from a training set of data until convergence, as described previously. Using a priori knowledge of the question set in the form of initial conditions allows the convergence of an algorithm upon a result which can be externally validated and is robust to changes in initial conditions.
[0081] The model to be used to analyse the data is of the form
Y_{i,q}~Bernoulli(α+β_q cog_i+γ_q dem_i+δ_q dep_i)  Eqn. 4.1
where i is the subject at a specific visit, and q is the question. α is the intercept, and β_q, γ_q and δ_q are the cognitive latent trait score, the dementia latent trait score and the depression latent trait score of each question respectively. Furthermore, cog_i, dem_i and dep_i are the cognitive latent trait score, the dementia latent trait score and the depression latent trait score of each subject at their specific visit(s) respectively.
[0082] In order to overcome a reliance of the question scores upon the initial question seedings, an expectation maximisation algorithm was used, as discussed above. This expectation maximisation algorithm worked iteratively to ensure a data-driven final result for the difficulty of the items in each latent trait.
[0083] The algorithm was constructed as follows:
[0084] 1. Initial scores for the cognitive latent trait, the dementia latent trait, and the depression latent trait (cog_i^0, dem_i^0 and dep_i^0) are calculated for each subject at each of their visits by counting the relevant item responses for each category. These item responses are then summed and normalised to between 0 and 30.
[0085] 2. Fit the generalised linear model using cog_i^t, dem_i^t and dep_i^t as fixed covariates with α=0, to find β_q^t, γ_q^t and δ_q^t.
[0086] a. Identify extreme outliers (those outside the interquartile range ±three times the interquartile range) and set these extreme outliers to the maximum/minimum of the data within this range.
[0087] b. Set the question score to 0 in a given latent trait if the standard error overlaps with 0.
[0088] c. Take the absolute value of all scores.
[0089] 3. Fit the generalised linear model using β_q^t, γ_q^t and δ_q^t as fixed covariates, with α=0, to find cog_i^(t+1), dem_i^(t+1) and dep_i^(t+1).
[0090] a. Identify extreme outliers (those outside the interquartile range ±three times the interquartile range) and set these extreme outliers to the maximum/minimum of the data within this range.
[0091] b. Set the subject score to 0 in a given latent trait if the standard error overlaps with 0.
[0092] c. Take the absolute value of all subject scores.
[0093] d. Normalise scores to between 0 and 30.
[0094] 4. Iterate steps 2 and 3 until convergence.
[0095] As will be appreciated, certain steps of the algorithm discussed above may be omitted or modified whilst still resulting in an algorithm able to achieve the intended result. For example, the sub-steps in steps 2 and 3 of identifying extreme outliers, setting scores to 0, or taking absolute values may be omitted or modified, and a different normalisation range may be used. Similarly, the normalisation of the initial patient scores in step 1 may be omitted or modified. However, the general method set out in steps 1, 2, 3, and 4 should be applied.
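The outlier-clipping, zeroing, and normalisation sub-steps (a-d) of steps 2 and 3 above can be sketched as follows. The helper names and the reading of "standard error overlaps with 0" as |score| ≤ standard error are assumptions made for illustration:

```python
import numpy as np

def clip_extreme(scores):
    """Sub-step a: values outside the interquartile range +/- 3*IQR
    fences are set to the maximum/minimum of the data inside the
    fences."""
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 3.0 * iqr, q3 + 3.0 * iqr
    inside = scores[(scores >= lo) & (scores <= hi)]
    return np.clip(scores, inside.min(), inside.max())

def postprocess(scores, std_errs):
    """Sub-steps a-d applied to one latent trait's scores; std_errs are
    the per-score standard errors from the GLM fit. Assumes the scores
    are not all equal, so the final normalisation is well defined."""
    s = clip_extreme(np.asarray(scores, float))
    se = np.asarray(std_errs, float)
    s[np.abs(s) <= se] = 0.0      # b: +/-1 SE interval overlaps 0
    s = np.abs(s)                 # c: absolute values
    return 30.0 * (s - s.min()) / (s.max() - s.min())  # d: 0-30 range
```

For instance, a score whose standard error exceeds its magnitude is zeroed, and the remaining scores are rescaled so the largest maps to 30.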
[0096] Here the algorithm is specifically described in the framework of application to binary data originating from subjects with Dementia, where the specific dataset analysed herein originates from subjects with mild Alzheimer's disease. However, an algorithm of this type is more generally applicable across different forms of data, with the adaptation of the GLM to a different kind of model to suit, for example, non-binary data.
[0097] Dataset and Pre-Processing
[0098] The data used in the creation of an embodiment of the present invention was collected from patients with diagnosed Dementia and specifically patients diagnosed with Alzheimer's disease. However the principles discussed herein apply equally to data collected from patients with, e.g. behavioural variant frontotemporal dementia. The data originated from clinical trials for a drug to inhibit the aggregation of tau protein in the brain. Pathologies such as Dementia are associated with tangles of tau protein which inhibit cell transport. Clinical trials are conducted in separate phases to test their safety and efficacy. The data analysed herein originates from phase 3 trials, conducted to investigate drug effectiveness. The data was collected from a variety of neuropsychological tests, at various specified time points during subject treatment.
[0099] As examples, the tests analysed in this document are:
[0100] 1. Mini-Mental State Examination (MMSE), an assessment of cognition.
[0101] 2. Alzheimer's Disease Assessment Scale—cognitive subscale (ADAS-cog), a different assessment of cognition.
[0102] 3. Alzheimer's Disease Cooperative Study—Activities of Daily Living Inventory (ADL), an assessment of functionality of an Alzheimer's patient with respect to everyday activities.
[0103] 4. Neuropsychiatric Inventory (NPI), an assessment of neuropsychiatric symptoms.
[0104] 5. Montgomery-Asberg Depression Rating Scale (MADRS), an assessment of depression.
[0105] Each of the tests comprises multiple items which, together, aim to give an insight into the severity of the Dementia of a patient. There is discussed herein a scoring system which can describe each of these tests in the same space. That is to say, a system which mathematically describes independent tests, and the items within those tests, according to the latent traits of which they are a measure, in the same n-dimensional space.
[0106] Data collected from the specific neuropsychological tests investigated for the purposes of the discussion herein can take the form of dichotomous or polychotomous data. For the purpose of standardisation all data is processed or cut to ensure that it is binary and that an answer of “1” is indicative of a positive response and an answer of “0” is indicative of a negative response. In order to ensure that all data is binary the divide-by-cut method is employed. Taking an example from the ADL test:
[0107] “Regarding eating: Which best describes the subject's usual performance during the past 4 weeks?
[0108] 3—ate without physical help, and used a knife
[0109] 2—used a fork or spoon, but not a knife, to eat
[0110] 1—used fingers to eat
[0111] 0—subject usually or was always fed by someone else”
[0112] Possible answers to this item are 0, 1, 2, and 3. In order to make the item binary, it is split into multiple items each with a binary type response:
[0113] “Regarding eating: Does the subject score higher than 0?
[0114] Regarding eating: Does the subject score higher than 1?
[0115] Regarding eating: Does the subject score higher than 2?”
[0116] This method was applied to all polychotomous questions to ensure that all data collected was binary.
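The divide-by-cut split described above can be sketched as a small helper; the function name is an illustrative assumption:

```python
def divide_by_cut(response: int, max_score: int) -> list:
    """Split one polychotomous item scored 0..max_score into max_score
    binary items, each asking 'does the subject score higher than k?'
    (k = 0, 1, ..., max_score - 1)."""
    return [1 if response > k else 0 for k in range(max_score)]

# For the ADL eating item (scored 0-3), a response of 2 becomes:
# higher than 0? yes (1); higher than 1? yes (1); higher than 2? no (0).
```

So `divide_by_cut(2, 3)` yields `[1, 1, 0]`, and the extremes map to all-ones or all-zeros.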
[0117] The second point is to ensure that all positive answers are encoded “1” and all negative answers are encoded “0”. As the neuropsychological tests of interest all pertain to the diagnosis of Dementia, a negative answer is one that is indicative of that disease; for a more general application, this can be adapted to generally encoding a positive answer as “1” and a negative answer as “0”. The response to all items was oriented to reflect this. Of course, it will be appreciated that the inverse convention could also be applied, i.e. negative answers encoded “1” and positive answers encoded “0”, in which case the higher the score, the more probable it would be that the subject has Dementia.
[0118] Results
[0119] The methods discussed herein for scoring items according to underlying latent traits were implemented using an expectation maximisation algorithm and generalised linear models, as described in the general mathematical methods section.
[0120] Implementing this algorithm yields item scores β_q, γ_q and δ_q at each iteration.
[0121] In
[0122] Robustness Analyses
[0123] Initial Conditions
[0124] The results presented in
[0125] The initial conditions are devised by grouping together questions which could be answered similarly and devising a simple score for each subject at each of their visits from these items. This means that an individual item is not assigned to a latent trait from which it cannot deviate; rather, it is used to contribute to a larger picture of what each latent trait should look like. When too few seed items are used, an insufficient idea of the latent traits is communicated to the algorithm, preventing the algorithm from yielding the same results as if more items were included a priori.
[0126] Seeding 34% of the complete set of items and calculating question scores using the same method yielded a median percentage change of individual item score of 2.9% in the cognitive domain, 2.2% in the dementia domain and 1.8% in the depression domain. Repeating the procedure using 6.8% of the initial seed questions yielded a median percentage change of 6.7% in the cognitive domain, 7.7% in the dementia domain and 6.9% in the depression domain. Relatively small percentage changes in question score occur despite a large reduction in the number of initial questions used to calculate initial patient score estimates. These small percentage changes do not affect the interpretation of the items as they relate to the underlying latent traits, as demonstrated in
[0127] Item Difficulty Ordering
Individual items are scored in a way which is analogous to their difficulty with respect to the cognitive, dementia, and depression latent traits. Items with a higher score with respect to a latent trait have a lower difficulty with respect to that latent trait.
[0128]
[0129] Justification of 3-Dimensional Model
[0130] Taken together, the results presented provide evidence that the MMSE, ADAS-cog, ADL, NPI and MADRS are answered independently of each other. That is to say, three independent latent traits govern the way in which subjects answer the items from these tests. The independent dimensions are designated as the cognition, dementia and depression latent traits. These latent traits can be distinguished most clearly when seed questions are used to provide the algorithm with some a priori knowledge of the latent traits to be discovered and an expectation maximisation algorithm is used, but they are also present in the naïve item response theory model. The use of a 3-dimensional model to represent the data is tested by a likelihood ratio test, which assesses whether the 3-dimensional model gives a statistically significantly better fit to the data than a reduced, 2-dimensional model. Sequentially removing latent traits and comparing the resulting 2-dimensional models against the 3-dimensional model showed that 80% of subjects benefited from the 3-dimensional model when the depression dimension was removed. When the dementia latent trait was removed from the model, 60% of subjects benefited from the 3-dimensional model, and finally, when the cognitive latent trait was removed, 79% of subjects benefited from the 3-dimensional model. Taken together, these results show that at least 60% of subjects benefited from the fitting of a 3-dimensional model, justifying the use of at least 3 latent traits to characterise the progression of mild Alzheimer's disease.
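The per-subject likelihood-ratio comparison described above can be sketched as follows. The log-likelihood values, the significance level, and the restriction to one degree of freedom are illustrative assumptions; only the test statistic 2×(log-likelihood difference) and its comparison against a chi-squared distribution follow the standard likelihood ratio test named in the text:

```python
# Sketch of a likelihood-ratio test comparing a reduced (e.g. 2-dimensional)
# model against a fuller (e.g. 3-dimensional) model for one subject.
import math

def likelihood_ratio_test(loglik_reduced, loglik_full, df, alpha=0.05):
    """Return (statistic, reject) for H0: the reduced model is adequate."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    # chi-squared survival function; for df=1 it reduces to erfc(sqrt(stat/2))
    if df == 1:
        p_value = math.erfc(math.sqrt(max(stat, 0.0) / 2.0))
    else:
        raise NotImplementedError("sketch covers df=1 only")
    return stat, p_value < alpha
```

A subject "benefits" from the fuller model when the test rejects the reduced model for that subject's responses.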
[0131] The methods discussed above have shown that by characterising items of a plurality of dementia tests, it is possible to investigate specific latent traits by selection of items (i) the answers of which are indicative to that specific latent trait; and (ii) with an appropriate degree of difficulty given previous answers.
Conclusion
[0132] In conclusion, what has been presented herein is evidence showing that individual items from distinct neuropsychological tests can exist within the same multidimensional space. This space mathematically characterises how well individual items measure underlying latent traits in subjects with mild Alzheimer's disease. The general algorithm is an application of expectation maximisation to item response theory, realised herein in the form of generalised linear models.
[0133] Computer Adaptive Testing Algorithm Implemented in a Neuropsychological Test
[0134]
[0135] After the subject answers the one or more seed questions, a score for the latent trait being measured is calculated in step 103. In order to calculate the score, an item value look-up function 101 is used to access a store 109 of question ability or difficulty values. The detail of this calculation is discussed below.
[0136] Subsequently, the score is used (in step 104) to select one or more questions to be administered. Again, question/item ability or difficulty values are accessed via the item value look-up function 101. The detail of the selection of one or more subsequent questions is discussed below.
[0137] After selecting the one or more further questions, they are administered in step 105 and the results recorded. These results are used to update the score for the latent trait in step 106. After this, a decision is made as to whether the test completion criteria has been met (see decision box 107). The test completion criteria could be, for example, a minimum confidence value. Alternatively, the test completion criteria could be manually triggered by an administrator of the test, or could be automatically triggered when a predetermined number of questions have been administered.
[0138] If the test completion criteria has not been met, the loop returns to step 104 and repeats the sequence discussed above. This loop from step 104, to 105, then 106, and back to 104 defines the adaptive test loop.
[0139] If the test completion criteria has been met, the loop moves to step 108 and the test ends. Confidence values in the output of the test are provided. The output of the test may be, for example, a dementia rating score or the score value for the latent trait being measured.
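The adaptive test loop of steps 103 to 108 can be summarised as a short driver sketch. This is illustrative only; the function names and callable interfaces are assumptions, and the score and selection calculations they stand in for are detailed later in the text:

```python
# Generic driver mirroring steps 103-108 of the adaptive test described above.
# All concrete behaviour (scoring, selection, completion) is supplied by the
# caller, reflecting that these are separate components of the method.

def run_adaptive_test(seed_questions, select_questions, administer,
                      update_score, completion_met):
    """Run the adaptive test loop and return the final score and confidence."""
    answers = administer(seed_questions)             # administer seed questions
    score, confidence = update_score(None, answers)  # step 103: initial score
    while not completion_met(score, confidence):     # decision box 107
        questions = select_questions(score)          # step 104: select questions
        answers = administer(questions)              # step 105: administer them
        score, confidence = update_score(score, answers)  # step 106: update
    return score, confidence                         # step 108: test ends
```

The completion criterion passed in may be a minimum confidence value, a question-count limit, or an administrator-triggered flag, as described above.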
[0140]
[0141] Thus, after the test starts in step 201, an N dimension weight preference is chosen for each dimension, either directly or it may be predefined by the test. The weight preference may, in some examples, be a binary weighting, i.e. 0 indicating that the latent trait is not to be measured, and 1 indicating that it is to be measured. Alternatively, the weight preference may be indicative of the administrator's preference as to which latent trait should be measured. For example, a weight preference of 0.5 for dem and 1 for cog may indicate that the administrator of the test requires a score value for the cog latent trait and (if possible) would like a score value for the dem latent trait. The N dimension weighting is sent, via link 212, to a module 210 discussed below.
[0142] After selection of the N dimension weight preferences, the test begins in step 203 where one or more seed questions are administered. Subsequently, in step 204, N dimension scores are calculated. In order to calculate the N dimension scores, an N dimension values per item lookup function 213 operates, and interacts with store 211. Store 211 contains N dimension question ability/difficulty values.
[0143] Having calculated an initial value for each of the N dimensions, the test (in step 205) uses these values to select 1 or more questions. Here, the test utilises the same function 213 used to lookup the N dimension values per question, but the selection is modified or guided based on the previously set N dimension weighting by module 210. For example, if a weight preference of 0 was set for dem, no questions relating to the dementia latent trait will be selected.
[0144] Once the questions have been selected, they are administered in step 206 and the answers recorded. The answers to these questions are used in step 207 to update the N dimension scores. This updating step also uses the function 213 to retrieve the N dimension question ability/difficulty value.
[0145] After the scores are updated, the test determines (see decision box 208) whether the test criteria has been met. The test completion criteria could be, for example, a minimum confidence value. Alternatively, the test completion criteria could be manually triggered by an administrator of the test, or could be automatically triggered when a predetermined number of questions have been administered.
[0146] If the test completion criteria has not been met, the test returns to step 205 and enters the adaptive test loop.
[0147] If the test completion criteria has been met, the test moves to step 209 where the test ends and N dimension scores and confidence values are provided.
[0148] Calculation of Subject Score:
[0149] To calculate the score of a subject, and the corresponding confidence interval, given the subject's answers to the administered questions, a maximum likelihood estimator is used. The likelihood estimate of a subject having a given score is calculated, and the subject ability score with the maximum likelihood is chosen. The likelihood term, r, of a question is calculated as
r=q·x+e
[0150] if the subject's answer is incorrect, or
r=−(q·x+e)
[0151] otherwise, where q and e are both variables associated with the difficulty of a given question, and x is the ability of the subject. The log-likelihood is calculated at each subject ability x as
L.sub.x=−log(1+e.sup.r)
[0152] and is summed over all questions answered for each x. The subject ability x with the highest L.sub.x is the maximum likelihood estimate of the subject ability. The confidence intervals are calculated based on 1 standard deviation of the maximum likelihood estimate of the subject score, which corresponds to a 68% confidence interval. A 68% chi-squared test has a threshold of 0.9889, and 2×log-likelihood is mathematically similar to a chi-squared statistic. Therefore the confidence interval is based on
L.sub.x≥L.sub.max−0.9889/2
The range of x in accordance with this range in L.sub.x is the confidence interval.
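The subject-score calculation described above can be sketched with a simple grid search over ability x. This is an illustration, not the patent's implementation; the form of r, the ability grid, the example question parameters, and the confidence threshold follow one reading of the description:

```python
# Sketch: maximum likelihood subject score and 68% confidence interval.
import math

def score_subject(answers, xs=None):
    """answers: list of (q, e, correct) tuples; returns (x_hat, (lo, hi))."""
    if xs is None:
        xs = [i / 10 for i in range(-60, 61)]  # ability grid: an assumption
    def loglik(x):
        total = 0.0
        for q, e, correct in answers:
            # r per question: -(q*x + e) for a correct answer, +(q*x + e) otherwise
            r = -(q * x + e) if correct else (q * x + e)
            total += -math.log(1.0 + math.exp(r))
        return total
    ls = [loglik(x) for x in xs]
    l_max = max(ls)
    x_hat = xs[ls.index(l_max)]  # maximum likelihood estimate of ability
    # 68% CI: abilities whose log-likelihood lies within 0.9889/2 of the max
    inside = [x for x, l in zip(xs, ls) if l >= l_max - 0.9889 / 2]
    return x_hat, (min(inside), max(inside))
```

A finer grid, or a continuous optimiser, could replace the grid search; the grid form is used here only to keep the sketch self-contained.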
[0153] The above method can be applied simultaneously to multiple dimensions to calculate subject ability scores on multiple latent traits. Such a calculation can be done using the same method applied to specific trait q.sub.d and e.sub.d in which d is the latent trait/dimension to be measured, and correspondingly the subject ability x.sub.d is specific to the latent trait/dimension to be measured.
[0154] Calculation of Next Question
[0155] In order to choose the next question or questions to be administered that are best suited to the ability range of the subject, the information content, I.sub.qx, of each question at a given ability score x is calculated as
[0156] An iterative procedure then begins to pick the next question or questions. This is achieved by calculating the information content of the current set of proposed next questions. The information content of the set of next questions is calculated by summing the information content I.sub.qx over all questions in the set of proposed next questions (one is added per iteration) for each patient ability within a pertinent range. This pertinent range is calculated to be between the lower confidence interval−2, lowerCI−2, and the upper confidence interval+2, upperCI+2.
[0157] Where n is the number of questions in the proposed set of next questions.
[0158] Then the information content for each possible eligible question for the proposed set of next questions is calculated. This is done by summing the information content for each question over all pertinent subject abilities x and weighting by the information content of the current proposed set of next questions at each x
[0159] The question with the highest information content I.sub.q is then added to the proposed set of next questions and the iterative procedure begins again. The procedure continues until a maximum set number of next questions has been chosen (for example, n=1, or n=5), the list of eligible questions is empty, or the highest information content of any question is 0.
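The greedy selection procedure described above can be sketched as follows. Since the corresponding equations are not reproduced in the text, the item information function (here the standard two-parameter logistic form q²·P·(1−P)) and the exact weighting rule are assumptions; the pertinent range [lowerCI−2, upperCI+2] and the stopping conditions follow the description:

```python
# Sketch: iterative selection of the next question(s) by information content.
import math

def item_info(q, e, x):
    """Assumed 2PL item information: q^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-(q * x + e)))
    return q * q * p * (1.0 - p)

def pick_next_questions(bank, lower_ci, upper_ci, n_max=5):
    """bank: dict name -> (q, e). Greedily pick up to n_max next questions."""
    # pertinent ability range: [lowerCI - 2, upperCI + 2], sampled on a grid
    n_pts = int((upper_ci - lower_ci + 4) / 0.1) + 1
    xs = [lower_ci - 2 + 0.1 * i for i in range(n_pts)]
    chosen, eligible = [], dict(bank)
    while eligible and len(chosen) < n_max:
        # information of the current proposed set at each pertinent ability x
        set_info = [sum(item_info(*bank[c], x) for c in chosen) for x in xs]
        best_name, best_score = None, 0.0
        for name, (q, e) in eligible.items():
            # weight the candidate's information by the set's information
            # (one plausible reading of the description; an assumption)
            score = sum(item_info(q, e, x) * (1.0 + s)
                        for x, s in zip(xs, set_info))
            if score > best_score:
                best_name, best_score = name, score
        if best_name is None:  # highest information content is 0: stop
            break
        chosen.append(best_name)
        del eligible[best_name]
    return chosen
```

The stopping conditions mirror those listed above: the set reaching its maximum size, the list of eligible questions emptying, or no remaining question carrying any information.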
[0160] As previously, the method can be applied to multiple latent traits in the same way by using question scores q.sub.d and e.sub.d, and subject ability score x.sub.d, specific to that latent trait.
Worked Example
[0161] A subject answers example seed questions in such a way (where questions are coded for ease of use): [0162] ‘deps_thoughts_slowed’: ‘Y’, ‘dob_year_correct’: ‘Y’, ‘age_correct’: ‘Y’, ‘mem_pm_prev’: ‘N’, ‘mmse_year’: ‘N’, ‘deps_sad’: ‘N’, ‘dob_date_correct’: ‘Y’, ‘mmse_month’: ‘Y’, ‘deps_diff_concentrate’: ‘N’, ‘mem_pm’: ‘Y’, ‘mmse_season’: ‘Y’, ‘mmse_date’: ‘N’, ‘mmse_city’: ‘Y’, ‘mmse_building’: ‘Y’, ‘deps_diff decisions’: ‘N’, ‘deps_prefer_alone’: ‘N’, ‘dob_month_correct’: ‘Y’, ‘mmse_dayweek’: ‘Y’, ‘deps_lost_energy’: ‘N’, ‘forget_start_date’: ‘24’, ‘deps_more_tense’: ‘N’, ‘forget_where_more’: ‘Y’, ‘prax_coin’: ‘Small’. [0163] Score=17.1, confidence interval=[14.7,19.9] [0164] Next questions: ‘serial_sevens’, ‘mmse_recall3’, ‘mmse_world’, ‘abs_apple_banana’, ‘prax_clock_num’.
[0165] While the invention has been described in conjunction with the exemplary embodiments described above, many equivalent modifications and variations will be apparent to those skilled in the art when given this disclosure. Accordingly, the exemplary embodiments of the invention set forth above are considered to be illustrative and not limiting. Various changes to the described embodiments may be made without departing from the spirit and scope of the invention.