Although remote neuropsychological assessments have become increasingly common, current research on the reliability and validity of scores obtained from remote at-home assessments is sparse. No studies have examined remote at-home administration of the National Alzheimer’s Coordinating Center (NACC) Uniform Data Set (UDS), even though this battery is being used to track over 45,000 participants over time. This study aimed to determine whether remote UDS scores can be combined with in-person data by assessing whether rates of score change over time differed by modality (i.e., reliability) and whether remote and in-person scores converge (i.e., validity).
Data from UDS visits conducted from 09/2005 to 12/2021 at 43 Alzheimer’s Disease Research Centers were examined. We identified 311 participants (254 cognitively unimpaired, 7 impaired but not mild cognitive impairment, 25 mild cognitive impairment, 25 dementia) who completed 2 remote UDS visits an average of 0.868 years apart (SD = 0.200 years). First, initial remote scores were correlated with the most recent in-person scores. Second, we examined whether rates of change differed between remote and in-person assessments. Repeated-measures one-way ANOVAs were used to compare rates calculated for the same individual from remote versus in-person assessments. Because all remote visits occurred after in-person visits, we additionally identified a demographically- and visit-number-matched group of 311 participants with in-person UDS visits; one-way ANOVAs were used to compare remote rates to rates from in-person assessments in the matched in-person group. Finally, the accuracy of remote scores was assessed by quantifying the difference between the actual remote scores and scores predicted from repeated in-person assessments. These residual values were then divided by the maximum possible score to form error rates.
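As a concrete illustration of this final step, the minimal sketch below shows one way such an error rate could be computed. The abstract does not specify the prediction model, so the linear per-subject trajectory, the error_rate helper, and all numeric values are hypothetical assumptions rather than the authors’ implementation.

```python
import numpy as np

def error_rate(visit_years, in_person_scores, remote_year,
               remote_score, max_score):
    """Hypothetical helper: residual between an observed remote score
    and the score predicted from repeated in-person visits, scaled by
    the test's maximum score (assumes a linear per-subject trajectory)."""
    # Fit a straight line through the repeated in-person scores.
    slope, intercept = np.polyfit(visit_years, in_person_scores, deg=1)
    # Predict the score expected at the time of the remote visit.
    predicted = slope * remote_year + intercept
    # Residual divided by the maximum possible score -> error rate.
    return abs(remote_score - predicted) / max_score

# Illustrative values only: four annual in-person visits, then a
# remote visit in year 4 on a test with a maximum score of 25.
rate = error_rate([0, 1, 2, 3], [21, 20, 20, 19],
                  remote_year=4, remote_score=18.0, max_score=25)
print(f"error rate: {rate:.1%}")  # prints "error rate: 2.0%"
```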
Remote UDS scores on the MoCA-Blind, Craft Story immediate and delayed recall, digits forward, digits backward, phonemic fluency (F, L, F + L), and semantic fluency (animals, vegetables, animals + vegetables) were all highly correlated (all ps < 0.001) with scores obtained from the preceding in-person assessments. At the group level, within-subject comparisons between remote and in-person rates of change were not significantly different for 7/11 tests; between-subject comparisons were not significantly different for 10/11 tests. Vegetable fluency showed slightly reduced rates of change with remote assessment compared to in-person assessment. Critically, remote scores were consistent with scores predicted from the trajectory of each subject’s in-person assessments, with group mean error rates ranging from 0.7% (Craft Delayed Recall) to 3.9% (phonemic fluency - F).
Our results demonstrate adequate reliability and convergent validity for remotely administered, verbally based tests from the NACC UDS battery. Importantly, our findings provide some support for combining remote and in-person scores in studies that transitioned to remote testing due to COVID-19. However, future research is needed for tests with visual stimuli that assess visual memory, visuospatial function, and aspects of executive function.