
18 September 2018

How can digital health improve the signal-to-noise ratio in your clinical trials?

The contemporary framework for designing clinical trials is to build a comprehensive cognitive profile of the patient population from thorough but infrequent assessments. However, this framework struggles to capture the daily fluctuations in mood and cognition that many individuals with psychiatric disorders experience. Here we will discuss how the advent of digital health offers the opportunity to capture a more holistic representation of patients’ cognitive function from high-frequency assessments.

What does the signal-to-noise ratio mean for clinical trials?

Within the context of a clinical trial, the ‘signal’ is the outcome that the investigators are trying to measure, typically the patients’ response to the drug. ‘Noise’ refers to factors that distract from, or degrade the value of, that outcome measure. An example of noise would be a patient having an unusually bad night’s sleep before a cognitive assessment, which could hinder their performance and thereby confound the measured efficacy of a drug intended to demonstrate a pro-cognitive effect.

So, for clinical trials, the signal-to-noise ratio refers to the sensitivity of a measure to detect biological and cognitive changes caused by the therapeutic intervention (signal, e.g. improved performance due to the drug) relative to the confounding effects of external factors that influence the measure in ways not specific to that intervention (noise, e.g. poor performance due to lack of sleep). Understandably, pharmaceutical companies are keen to improve the signal-to-noise ratio in their clinical trials so they can determine the true efficacy of the drugs they are developing.
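To make this concrete, the toy simulation below (a minimal sketch, not drawn from any real trial; the effect size, noise levels and sample sizes are all hypothetical) shows how a fixed drug effect becomes harder to detect as assessment noise grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_rate(effect_size, noise_sd, n_per_arm=100, n_sims=2000):
    """Fraction of simulated trials in which the true drug effect clears the noise."""
    detected = 0
    for _ in range(n_sims):
        placebo = rng.normal(0.0, noise_sd, n_per_arm)       # no drug effect
        drug = rng.normal(effect_size, noise_sd, n_per_arm)  # true pro-cognitive effect
        diff = drug.mean() - placebo.mean()
        se = np.sqrt(placebo.var(ddof=1) / n_per_arm + drug.var(ddof=1) / n_per_arm)
        if diff / se > 1.96:                                 # crude z-test detection criterion
            detected += 1
    return detected / n_sims

# Same true drug effect, but doubling the assessment noise (poor sleep, time of day,
# travel stress) sharply reduces how often the effect is detectable.
print(detection_rate(effect_size=0.5, noise_sd=1.0))
print(detection_rate(effect_size=0.5, noise_sd=2.0))
```

The drug’s true effect is identical in both runs; only the noise changes, yet the detection rate falls sharply in the noisier configuration.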

Here we will discuss why typical clinical trial designs in psychiatry might be struggling with the signal-to-noise ratio in particular clinical populations, and conclude with how digital health can alleviate these problems.

The difficulty of optimizing signal-to-noise using conventional clinical trial designs

The typical clinical trial design implements a testing regimen in which drug efficacy can only be monitored relatively infrequently, often on the order of every few months. This is primarily due to the considerable cost and time, for both clinician and patient, needed to administer a comprehensive cognitive testing battery in the clinic. However, this study design does not allow for an accurate characterization of the continually fluctuating nature of behavioural symptoms, which in many clinical groups can be quite severe, such as in Lewy body dementia (1,2) and schizophrenia (3).

Mental abilities fluctuate from day to day, and even over the course of a single day, often making an accurate clinical assessment of a patient’s symptoms difficult with a standard trial design. In psychiatric disorders, abnormal circadian rhythms have been linked to these continually fluctuating levels of cognitive impairment (3,4). This is especially true for clinical populations with relatively heterogeneous cognitive profiles, such as schizophrenia and depression (3,4,5); however, this issue is often overlooked during diagnostic assessment.

This caveat is even more problematic when running a drug trial with a clinical population whose cognitive endpoints can vary substantially with the patients’ aberrant circadian rhythms. One approach recently adopted in drug trials to help account for this variance is to test patients at the same time of day throughout the study. This can be a relatively effective way to control for daily symptom fluctuations in clinical populations with fairly predictable circadian cycles, such as Alzheimer’s disease (5). However, many patients with psychiatric disorders and abnormal sleep/wake cycles (3,4) will, by definition, not display a consistent pattern of fluctuating symptoms over the course of a day, so such scheduling does little to improve signal-to-noise in these populations.

What are the implications of a poor signal-to-noise ratio for clinical trial success?

Using low-frequency clinical assessments with patient populations who have continual fluctuations in cognition can generate the following barriers to trial success:

  • Difficulties in assessing diagnostic comorbidity
  • Inflated baseline scores
  • Reduced sensitivity to detect a positive drug effect

What are the advantages of conducting high-frequency clinical assessments?

Brief daily assessments can provide a sensitive characterization of an individual’s behavioural fluctuations, improving the accuracy of symptom classification and establishing a more robust baseline.

This method of ‘burst testing’ allows researchers to aggregate behavioural data across multiple time points spanning several days or weeks. The result is a more stable baseline measure that incorporates the variance associated with fluctuating levels of cognitive impairment. Furthermore, recent studies have found that the degree of fluctuation in cognitive performance can itself be a sensitive measure for characterizing disease progression (1) and responsiveness to pharmacological intervention (2) in certain clinical populations.
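As a minimal sketch of this aggregation step (the patient IDs, score scale and two-week window are illustrative assumptions, not data from the cited studies), daily burst-testing scores can be summarised into a per-patient baseline mean alongside an intra-individual variability measure:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical daily scores over a two-week burst-testing baseline for two patients:
# one relatively stable, one with strongly fluctuating cognition.
daily = pd.DataFrame({
    "patient": ["P01"] * 14 + ["P02"] * 14,
    "day": list(range(1, 15)) * 2,
    "score": np.concatenate([rng.normal(50, 3, 14), rng.normal(50, 9, 14)]),
})

baseline = daily.groupby("patient")["score"].agg(
    baseline_mean="mean",     # baseline built from many time points, not a single visit
    fluctuation_sd="std",     # intra-individual variability, itself a candidate measure
    n_assessments="count",
)
print(baseline)
```

Here the fluctuation_sd column captures the within-patient variance discussed above, which can be tracked alongside the mean rather than discarded as error.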

Another benefit of this approach is that high-frequency assessments create a comprehensive timeline over the course of a study, clearly demonstrating the temporal relationship between a patient’s behaviour and the pharmacological intervention. The measured efficacy of a drug compound is then not reliant on a few crucial post-baseline time points, but on a rich collection of longitudinal data points that can better characterize the therapeutic intervention.

Relatedly, patients can become burdened by the stress and travel associated with the repeated clinic visits that drug trials require, and these additional demands can compromise the sensitivity of in-clinic measures to detect the pro-cognitive effects of a drug (i.e. noise). Increased remote data sampling improves the accuracy with which clinical ratings characterize a patient’s symptoms (i.e. signal) and reduces the reliance of drug trials on just one or two critical endpoint measures.
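A rough back-of-the-envelope sketch (assuming session-level measurement noise that is roughly independent across remote assessments; the noise value is hypothetical) illustrates why averaging many remote measurements yields a more precise endpoint than one or two clinic visits:

```python
import numpy as np

session_noise_sd = 5.0  # hypothetical within-patient noise for a single assessment

for n_sessions in (1, 2, 14, 28):
    se = session_noise_sd / np.sqrt(n_sessions)
    print(f"{n_sessions:>2} assessments -> standard error of the endpoint ≈ {se:.2f}")

# Precision improves with the square root of the number of assessments,
# so the trial no longer hinges on one or two critical clinic visits.
```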

In the future, this approach of high-frequency remote testing could significantly reduce the length of clinical trials by building substantial datasets that accurately characterize drug-related changes within a shorter period, without overburdening the patient.

Summary

Cognition can be highly variable from moment to moment in people with psychiatric and neurological disorders, so the means of assessing cognition in these groups must be equally agile. Technological developments now make it possible to assess cognition daily and remotely, without the supervision of a healthcare professional. One key advantage of this method is an improved signal-to-noise ratio, and therefore a more sensitive metric of successful therapeutic intervention.

Next we will discuss how to use digital health to measure behaviour more accurately.

Interested in learning how to improve the signal-to-noise ratio in your clinical trials?

References 

  1. https://www.ncbi.nlm.nih.gov/pubmed/11044778
  2. https://www.ncbi.nlm.nih.gov/pubmed/22829268
  3. https://www.ncbi.nlm.nih.gov/pubmed/21263013
  4. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4018537/
  5. https://www.ncbi.nlm.nih.gov/pubmed/18066734
  6. https://www.ncbi.nlm.nih.gov/pubmed/11329390

 

Written by Nathan Cashdollar and Sally Jennings

Nathan Cashdollar PhD

Director of Digital Neuroscience
