Abstract
Objective
This article provides the test–retest reliability and Reliable Change Indices (RCIs) of the Philips IntelliSpace Cognition (ISC) platform, which contains digitized versions of well-established neuropsychological tests.
Method
A total of 147 participants (ages 19 to 88) completed a digital cognitive test battery on the ISC platform or paper-pencil versions of the same test battery during two separate visits. Intraclass correlation coefficients (ICCs) were calculated separately for the ISC and analog test versions to compare reliabilities between administration modalities. RCIs were calculated for the digital tests using the practice-adjusted RCI and the standardized regression-based (SRB) method.
Results
Test–retest reliabilities for the ISC tests ranged from moderate to excellent and were comparable to those of the paper-pencil tests. Baseline test performance, retest interval, age, and education predicted test performance at visit 2, with baseline performance being the strongest predictor for all outcome measures. For most outcome measures, both methods for calculating RCIs agreed on whether or not a reliable change was observed.
Conclusions
RCIs for the digital tests enable clinicians to determine whether a measured change between assessments is due to real improvement or decline. Together, these findings contribute to the growing evidence for the clinical utility of the ISC platform.
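The two reliable-change approaches named in the Method section follow standard formulations: the practice-adjusted RCI (change score minus the group mean practice effect, scaled by the standard error of the difference) and the SRB method (deviation of the observed retest score from a regression-predicted score). The sketch below illustrates those generic formulas only; all numeric values are placeholders, not coefficients or norms from this article.

```python
import math

def practice_adjusted_rci(x1, x2, mean_practice, sd1, sd2, r12):
    """Practice-adjusted RCI: the observed change minus the group mean
    practice effect, divided by the standard error of the difference
    derived from the test-retest reliability r12."""
    sem1 = sd1 * math.sqrt(1 - r12)          # standard error of measurement, visit 1
    sem2 = sd2 * math.sqrt(1 - r12)          # standard error of measurement, visit 2
    sed = math.sqrt(sem1 ** 2 + sem2 ** 2)   # standard error of the difference
    return (x2 - x1 - mean_practice) / sed

def srb_z(x2_observed, x2_predicted, see):
    """SRB change score: deviation of the observed retest score from the
    score predicted by a regression on baseline performance (and, as in
    this study, retest interval, age, and education), scaled by the
    regression's standard error of estimate."""
    return (x2_observed - x2_predicted) / see

# Illustrative placeholder values: a 12-point gain, a 3-point mean
# practice effect, visit SDs of 10, and a test-retest reliability of .84.
z_rci = practice_adjusted_rci(x1=50, x2=62, mean_practice=3.0,
                              sd1=10.0, sd2=10.0, r12=0.84)
reliable = abs(z_rci) > 1.96  # conventional two-tailed 95% cutoff
```

With these placeholder inputs the z-score (about 1.59) falls below the 1.96 cutoff, so the 12-point gain would not count as reliable change; the cutoff itself is a clinical choice, and the article compares whether the two methods classify the same cases as reliably changed.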
Original language | English |
---|---|
Pages (from-to) | 1707-1725 |
Number of pages | 19 |
Journal | The Clinical Neuropsychologist |
Volume | 38 |
Issue number | 7 |
Early online date | 15 Feb 2024 |
DOIs | |
Publication status | Published - 2024 |
Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2024 Informa UK Limited, trading as Taylor & Francis Group.
Funding
The author(s) reported there is no funding associated with the work featured in this article. Laura Klaming, Mandy Spaltman, Stefan Vermeent, Gijs van Elswijk, and Ben Schmand are or have been employed by Philips. Justin B. Miller received consultation fees from Philips. This work was not supported by any grants. The authors would like to thank the Digital Cognitive Diagnostics team at Philips Healthcare for their invaluable work in the development of the ISC tests as well as their practical support during data collection.
Keywords
- cognitive tests
- digital technology
- intraclass correlation
- neuropsychology
- practice effects
- reliable change index
- test-retest reliability