Augmented with AI
31 October 2023
The process of organising and running a clinical trial comes with several challenges, and a significant portion of the cost can be attributed to the operational conduct of the trial. This includes patient recruitment, site monitoring, data management and site oversight. Digital tools exist to assist with these processes, but many in the industry are touting AI's potential to help trials run more smoothly. Monica Karpinski speaks to Stefan Harrer, chief innovation officer at the Digital Health Cooperative Research Centre, and Aldo Faisal, professor of AI and neuroscience at Imperial College London, to delve into how AI can be applied to optimise clinical trials.
From preclinical development through to phase 3, it costs an estimated $800m–1.4bn to run a clinical trial. If it fails, which 90% of trials do, all of that is a sunk cost. A sizable chunk of that money goes on operations, including patient recruitment and retention, and trial monitoring. According to a 2020 report by Deloitte, recruitment is the main cost driver for trials, accounting for 32% of overall spend. These processes are often time intensive and subject to human error – and their inefficiencies can be the difference between a trial succeeding or failing.
In efforts to tighten up performance, many in the industry have turned their attention to AI. Could it help run better trials? Judging by the promise of AI-powered tools that are already available, it certainly seems so. For example, we can now recruit patients with unprecedented speed, or predict trial failure before it happens so that there's enough time to intervene. And, according to some in the field, we've barely scratched the surface of what AI is capable of.
Recruiting enough participants for a trial is where you’ll sink most of your cost, plus a good amount of your team’s time. Typically, clinicians would need to sort through patient records to find suitable candidates, while patients would scour clinical trial databases and try to make sense of them, says Stefan Harrer. As the chief innovation officer at Digital Health Cooperative Research Centre explains, “Going through this manually is a huge burden. That’s the first thing that AI brings to the table: efficiency.”
With AI, we can automatically identify eligible patients and match them with trials they may benefit from – a technique called clinical trial matching. Two processes are happening here, notes Harrer. On one side, clinical trial descriptions (which are publicly available) are mined and their eligibility criteria are extracted; on the other, patient information is scanned to see who meets those criteria. Any online information describing a patient's medical history can be considered, from electronic health records to social media profiles. However, you do need to manually assess the quality of the information that's going into the model to reduce the risk of bias, Harrer adds. Researchers should look for any indications that a patient's profile has been misrepresented, whether there's enough data to work with, and whether there are any errors in the dataset that need to be fixed.
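The matching step Harrer describes can be sketched as a structured filter: eligibility criteria (hand-written here, though in practice mined from trial descriptions) applied to patient records, with missing fields routed to manual review rather than silently matched. A minimal sketch, assuming invented field names and thresholds:

```python
# Hypothetical clinical trial matching: extracted eligibility criteria
# are applied as structured filters over patient records.
# All field names and thresholds are illustrative, not from a real trial.

from dataclasses import dataclass

@dataclass
class Criterion:
    field: str     # patient attribute, e.g. "age"
    op: str        # comparison operator: "==", ">=", "<="
    value: object  # threshold or required value

def meets(patient: dict, c: Criterion) -> bool:
    v = patient.get(c.field)
    if v is None:
        return False  # missing data: flag for manual review, not a match
    return {"==": v == c.value,
            ">=": v >= c.value,
            "<=": v <= c.value}[c.op]

def match_patients(patients, criteria):
    """Return patients who satisfy every extracted eligibility criterion."""
    return [p for p in patients if all(meets(p, c) for c in criteria)]

criteria = [Criterion("age", ">=", 18),
            Criterion("diagnosis", "==", "type2_diabetes"),
            Criterion("hba1c", ">=", 7.0)]
patients = [{"id": 1, "age": 54, "diagnosis": "type2_diabetes", "hba1c": 8.1},
            {"id": 2, "age": 16, "diagnosis": "type2_diabetes", "hba1c": 9.0},
            {"id": 3, "age": 61, "diagnosis": "type2_diabetes"}]  # hba1c missing

print([p["id"] for p in match_patients(patients, criteria)])  # [1]
```

In a real system the criteria would come from an NLP pipeline over trial descriptions, and the incomplete record (patient 3) would be queued for the kind of manual quality check Harrer describes rather than simply excluded.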
AI can also reduce the number of people you need to recruit for your trial. One way to do this is by creating a "digital twin", an AI model of a patient group that acts as a synthetic control and can account for a portion of the people required. The model is built using data on that cohort, for example, data from previous trials on similar groups or other records you have. "It's really extracting those features from the population that are relevant to the trial, in terms of criteria and so on, and modelling those," says Harrer. A predictive model is then used to forecast how the synthetic cohort would have reacted had they been treated with the drug. But digital twins don't eliminate the need to use real people as well. "You can't completely substitute the real control arm with synthetic digital twin components, you complement it," adds Harrer, otherwise you wouldn't have a truly randomised trial.
Collecting better data
To find out how a participant is responding to a treatment during a trial, the methods of choice are still largely to take their word for it or have someone else be present to monitor them. “This is prone to error like you wouldn’t believe,” Harrer stresses. With unreliable data, researchers may not be able to identify whether that intervention is beneficial, meaning the trial is more likely to fail. Instead, what if we could automatically collect data that gives a truer read on how someone is faring? Enter digital disease diaries, cloud-based systems that can collect patient data via wearable sensors or video. The AI system detects symptoms of disease and logs them, without the patient or a supervisor needing to report anything.
32% – the percentage of overall spending in a clinical trial that goes on patient recruitment
Using wearables, clinicians can identify how a participant is doing in ways that conventional assessments can’t, explains Aldo Faisal, professor of AI and neuroscience at Imperial College London. For example, a standardised test for physical fitness such as the six-minute walk test, measuring how far someone can walk in six minutes, wouldn’t pick up that a patient can turn around in bed again. Faisal is involved in new research using wearable sensors to track the movement of people with Duchenne muscular dystrophy and Friedreich’s ataxia. Both are rare degenerative conditions that hinder movement and can eventually lead to paralysis. Faisal and his team used data collected from participants with such conditions and healthy controls to identify specific movements, which they called “behavioural fingerprints”, that were illustrative of someone’s physical capabilities. For example, how well they could make a circle with their hips, as if using a hula hoop.
The team were able to define behavioural fingerprints that distinguished people with the disease from the controls, and then fed this insight into machine learning algorithms to predict how the conditions would progress up to 12 months later – which they could do with greater accuracy than clinical scales considered the gold standard, says Faisal. “In future, because of the precision of this system, we’ll also need much fewer patients [in trials],” he adds. This could be a boon to trials for rare diseases, where smaller patient populations make recruitment even harder.
90% – the percentage of clinical trials that fail
Source: BIO Industry Analysis
The same type of AI used within digital disease diaries can also predict whether a trial is at risk of failing due to patients backing out. “Filling spots of dropouts is a massive problem and actually one of the major sources of failure of trials,” Harrer says. “It carries exponential costs, even more than if you’re recruiting into the trial in the first place.” Some trial designers report planning for dropout rates as high as 30%. AI can identify behaviour that suggests a patient isn’t following protocol, giving trial staff a chance to intervene and correct the problem. “You want to pick up and maybe even predict these patterns that point to behavioural changes,” explains Harrer. “For example, patient A is always missing that pill on a Tuesday morning, because they have to take their kid to school.” This could be as simple as having a sensor that fires every time a pill case is opened, or having an accelerometer in a smartwatch, he adds. Machine learning algorithms could then dig through and recognise patterns in all that information.
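Harrer's pill-case example can be made concrete with a short sketch: given timestamps of case openings from a simple sensor, find which scheduled doses were missed and check whether misses cluster on a particular weekday. The field names, schedule and data below are assumptions for illustration, not any real product's API.

```python
# Illustrative adherence-pattern detection from pill-case sensor logs:
# count missed daily doses by weekday over an observation window.
# Dates, schedule and helper names are invented for this sketch.

from collections import Counter
from datetime import date, timedelta

def missed_dose_weekdays(start: date, days: int, openings: set) -> Counter:
    """Count missed daily doses by weekday across the observation window."""
    misses = Counter()
    for i in range(days):
        day = start + timedelta(days=i)
        if day not in openings:  # no sensor event means no dose taken that day
            misses[day.strftime("%A")] += 1
    return misses

# Four weeks of logs in which the case was opened every day except Tuesdays
# ("patient A is always missing that pill on a Tuesday morning").
start = date(2023, 10, 2)  # a Monday
openings = {start + timedelta(days=i) for i in range(28)
            if (start + timedelta(days=i)).weekday() != 1}

misses = missed_dose_weekdays(start, 28, openings)
print(misses.most_common(1))  # [('Tuesday', 4)]
```

A real system would feed patterns like this, alongside richer signals such as accelerometer data, into a model that predicts dropout risk early enough for staff to intervene.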
Another use of AI is chatbots that send prompts to help re-engage patients. For example, if the model knew a patient usually forgot to take their pill every Thursday because they were so exhausted after work, it could send a reminder to go for a walk so they'd feel more energised and be more likely to remember. "AI-powered coaches can interact with patients and caregivers, and raise the alarm, but also come up with intelligent intervention strategies to prevent dropout," says Harrer. "Because you can't expect a clinician to call up every patient that's supposed to take a pill at five o'clock."
The human touch
But for all the capabilities that AI brings, it’s important to remember that it should be used to complement, rather than replace, human clinical expertise. “Humans must never be taken out of the loop,” says Harrer. “AI must always be used as an assistive tool.” AI lacks empathy and can’t make ethical judgements; it sees patients only as data points. Where AI shines is in helping clinicians and trial staff do their jobs better: retrieving knowledge, collecting new and more precise data, and sorting through information much faster than we can.
Harrer expects that in the coming years, we’ll see more solutions cropping up in relation to digital twins, AI chatbots for patient coaching, and clinical trial matching. For Faisal, having AI in a clinician’s arsenal means they could investigate a wider range of conditions. Currently, there are over 6,000 rare diseases, but most have no treatment because it’s not economically viable to develop them, he stresses. “But if you can take the risk out, and make it faster and cheaper, then we will see a lot more treatments coming up.”