Insights

22 February 2023

Assessments of ADAS-Cog administration quality are comparable across expert and non-expert reviewers

This poster was presented at the ISCTM Annual Scientific Meeting in February 2023.

The ecological validity and inter-/intra-rater reliability of clinical assessments depend on raters' consistent adherence to the administration guidelines unique to each assessment. Quality Assurance (QA) review of clinical assessments is therefore necessary to identify administration issues. These QA reviews are typically provided by clinical experts, which is a costly process. Here, we explore whether non-expert reviewers can provide ratings of ADAS-Cog administration quality similar to those of an expert clinician reviewer.

Fifteen audio recordings of ADAS-Cog assessments were reviewed by one expert reviewer and three non-expert reviewers. For each assessment, all reviewers completed a QA rubric developed from the ADAS-Cog administration manual. Each instruction was given a binary score for each QA issue based on the presence or absence of that issue (e.g. a score of 0 for a missing instruction, a score of 1 for a present instruction). Intraclass correlation (ICC) was used to quantify agreement between the expert reviewer and each non-expert reviewer.
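For illustration, a minimal sketch of how pairwise expert/non-expert agreement could be computed in Python using the pingouin library. The data layout, column names, file name, and choice of ICC form are assumptions for this sketch; the poster does not specify the software or ICC variant used.

```python
# Hypothetical sketch: pairwise ICC between an expert reviewer and each
# non-expert reviewer over binary QA rubric scores. Column names and the
# CSV layout are illustrative assumptions, not the poster's actual pipeline.
import pandas as pd
import pingouin as pg

# Long-format ratings: one row per (rubric item, reviewer) pair.
# 'score' is the binary QA score (1 = instruction present, 0 = missing).
ratings = pd.read_csv("adas_cog_qa_scores.csv")  # columns: item, reviewer, score

expert = "expert"
for reviewer in ["nonexpert_1", "nonexpert_2", "nonexpert_3"]:
    # Restrict to the expert and one non-expert, then compute ICC.
    pair = ratings[ratings["reviewer"].isin([expert, reviewer])]
    icc = pg.intraclass_corr(
        data=pair, targets="item", raters="reviewer", ratings="score"
    )
    # ICC2 (two-way random effects, absolute agreement) is one common choice
    # for rater-agreement studies; other forms may be appropriate.
    icc2 = icc.loc[icc["Type"] == "ICC2", "ICC"].iloc[0]
    print(f"{expert} vs {reviewer}: ICC = {icc2:.2f}")
```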

The ICC values between the expert and non-expert reviewers had a mean of 0.80 (SD = 0.03), representing good agreement. This supports the feasibility of extending QA review of cognitive assessments beyond experienced clinicians, reducing both the cost and the turnaround time of identifying and remediating administration issues.

Future directions for this work include expanding data collection to increase the number of reviewers and to include other assessments.

View the poster



Author:


Rachel Kindellan

Associate Product Manager, Winterlight
