
8 April 2021

Validation of a Smartphone-based Digit Symbol Substitution Task in Participants With Major Depression

At the virtual ISCTM Annual Scientific Meeting, Luke Allen presented data supporting the validity of a smartphone-based DSST for assessments in patient populations such as MDD. Read on for the key findings and the full poster: Validation and Comparability of Smartphone-based Digit Symbol Substitution Task with Written Version.

Background

The digit symbol substitution test (DSST) is a widely used neuropsychological assessment with demonstrated clinical sensitivity in patients with brain damage, dementia, schizophrenia, and depression, as well as to changes associated with natural aging. The DSST is also well suited to remote deployment (e.g., via smartphone) because of its brevity, clinical utility, and sensitivity to cognitive dysfunction across key domains.

In collaboration with Adams Clinical, we obtained data from participants with major depressive disorder (MDD) who completed both digital and written versions of the DSST, offering a unique opportunity to assess the comparability and validity of the smartphone-delivered digital DSST, Cognition Kit DSST.

Methods

Participants aged 18-85 who met DSM-5 criteria for MDD and had a Montgomery-Åsberg Depression Rating Scale score of ≥20 were recruited into an open-label treatment study. All participants completed both smartphone-based and pencil-and-paper (WAIS-III) DSST assessments at baseline (N=89), and a subset was reassessed at follow-up 28 days later (N=30). Only right-handed participants were included in the comparability analyses.

Agreement between task versions at baseline was evaluated using Pearson, intraclass, and concordance correlation coefficients. Bland-Altman and Bablok regression plots were used to visualize agreement, and associations with demographic variables such as age and sex were also explored.
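
For illustration, the sketch below shows how two of the agreement measures named above could be computed in Python with NumPy and SciPy. The arrays `written` and `digital` (correct items per participant for each DSST version) are hypothetical placeholders, not the study data.

import numpy as np
from scipy.stats import pearsonr

def bland_altman_stats(written: np.ndarray, digital: np.ndarray):
    """Bias (mean difference) and 95% limits of agreement, Bland-Altman style."""
    diff = written - digital
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Example usage with the hypothetical arrays described above:
# r, p = pearsonr(written, digital)                   # Pearson correlation between versions
# bias, limits = bland_altman_stats(written, digital) # systematic offset and agreement limits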

Analysis

Good agreement was achieved between the written and digital DSST at baseline (ICC=0.69, r=0.70, p=1.389e-14; Figure 1, left) and at visit 2 (ICC=0.60, r=0.61, p=0.0003; Figure 1, right).

Bablok regression analysis (a technique for assessing agreement between two measurement methods) revealed a shift toward more items completed on the written DSST (on average, 19.88 additional correct items). Adjusting the digital scores by adding this average score difference improved concordance from CCC=0.25 to CCC=0.7 (Figure 1, middle).
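
As a rough illustration of the concordance adjustment described above, the sketch below implements Lin's concordance correlation coefficient in Python; the arrays `written` and `digital` are again hypothetical per-participant scores, not the study data.

import numpy as np

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient (CCC)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# The CCC penalises systematic offset via the (mx - my)^2 term, so adding the
# average score difference to the digital scores removes that penalty:
# adjusted_digital = digital + (written - digital).mean()
# lins_ccc(adjusted_digital, written)    # higher than lins_ccc(digital, written)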

Adjusting for administration order (which version was completed first) using linear regression improved agreement to ICC=0.78 at baseline and ICC=0.68 at visit 2. Furthermore, test-retest reliability for the digital DSST (ICC=0.87) over approximately one month was comparable to that of the traditional pencil-and-paper version (ICC=0.84) (Figure 2).
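
One way intraclass correlations such as those reported here could be computed is a two-way random-effects ICC for absolute agreement, ICC(2,1). The sketch below is a minimal from-scratch version, assuming a hypothetical (n participants x 2 ratings) score matrix rather than the study data; the poster's exact ICC form may differ.

import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `scores` has one row per participant and one column per rating (e.g. the
    two DSST versions, or the two visits for test-retest reliability)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)    # between participants
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)    # between ratings
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )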

Conclusions

The high comparability of performance on the digital DSST with that of the traditional pencil-and-paper version demonstrates the utility and reliability of conducting remote cognitive assessments in clinical populations using smartphone technology.

Future work will focus on the usability of the digital DSST for participants and on its sensitivity to clinical features and to change over time.

View poster


Author:

Luke Allen

Clinical Scientist, Cambridge Cognition
