Adapt and thrive – understanding adaptive clinical trials

29 August 2014



Adaptive clinical trials hold great potential for the drug development process, saving time and money while better targeting the patients who might benefit. However, in order to realise these advantages, a sponsor company needs a strong methodology and infrastructure in place. Terry Katz, director of global data management and statistics at Merck Animal Health, explains how a sponsor can usefully harness the interim data at their disposal and make better decisions as a result.


Over the past few years, adaptive trials have generated significant buzz across the industry. With the potential to build flexibility into a trial - not to mention save time and money - adaptive designs have been pegged as a solution to various well-documented problems. However, as a relatively new approach to conducting clinical trials, much remains unexplored and there are real difficulties still to surmount.

The concept first gained traction in 2004, when the FDA introduced its Critical Path Initiative to smooth the typical drug's passage from lab to market. The aim was to reduce attrition during the clinical trial phases, which were notorious for high failure rates.

A year later, an adaptive design working group further clarified this intention. They defined an adaptive trial as "a clinical study design that uses accumulating data to decide how to modify aspects of the study as it continues, without undermining the validity and integrity of the trial." Importantly, these modifications should not be ad hoc - a response to poor planning - but integral to the design.

Flexible focus

In 2010, the FDA issued draft guidance on adaptive trials, and since then they have become ever more popular. According to a 2013 report by Tufts Center for the Study of Drug Development (CSDD), adaptive designs are being used in about 20% of clinical trials and usage is set to increase further still. The report concluded that adaptive designs can save $100-200 million a year when applied at the portfolio level.

The advantages are obvious. At a time when R&D costs are rising and fewer drugs than ever are reaching the market, fixed-format designs are starting to look inefficient. In a typical clinical trial design, the primary end point is specified in advance, and success depends on the accuracy of those upfront assumptions. This leaves little room to correct course if the assumptions prove wrong.

When conducting an adaptive trial, however, investigators can review the interim data as it accrues and change the design accordingly. Such data might relate to dosing, exposure, response modifiers or differential participant response, and researchers can learn and optimise as they proceed. Some trials have added and dropped whole drugs and patient groups as more information is obtained.

This in turn means a smaller cohort of participants is required - and it is easier to zero in on the particular groups that will benefit.

"The beauty of an adaptive trial is that you can run the trials faster, using fewer patients, emphasising scientific outcomes more than just the intended treatment," explains Terry Katz, director of global data management and statistics at Merck Animal Health. "If the product is very ineffective, you can stop the trial early and save the costs; if the product is very effective you can stop it early to get it out to the patients faster. So it works very well for each individual study."
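The early-stopping logic Katz describes is usually formalised as a group-sequential design: at each pre-planned interim look, a test statistic is checked against a stopping boundary. The sketch below is a minimal, hypothetical illustration - not any specific trial's method. The patient counts are invented, and the boundary values are the commonly tabulated O'Brien-Fleming critical values for two equally spaced looks at an overall two-sided alpha of 0.05.

```python
import math

# O'Brien-Fleming-style efficacy boundaries for two equally spaced
# interim looks (overall two-sided alpha = 0.05) -- standard tabulated values.
OBF_BOUNDARIES = [2.797, 1.977]

def two_prop_z(x_trt, n_trt, x_ctl, n_ctl):
    """Two-proportion z statistic with a pooled standard error."""
    p_trt, p_ctl = x_trt / n_trt, x_ctl / n_ctl
    pooled = (x_trt + x_ctl) / (n_trt + n_ctl)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_trt + 1 / n_ctl))
    return (p_trt - p_ctl) / se

def interim_decision(x_trt, n_trt, x_ctl, n_ctl, look):
    """Compare the interim z statistic against the boundary for this look."""
    z = two_prop_z(x_trt, n_trt, x_ctl, n_ctl)
    if z >= OBF_BOUNDARIES[look]:
        return "stop early for efficacy"
    if z <= 0:
        return "consider stopping for futility"  # deliberately crude futility rule
    return "continue to the next look"

# Hypothetical interim data: 30/50 responders on treatment vs 15/50 on control.
decision = interim_decision(30, 50, 15, 50, look=0)
```

The conservative first-look boundary is what preserves the overall false-positive rate despite repeated testing - the same mechanism that lets a sponsor stop a clearly effective (or clearly futile) trial early without undermining its statistical validity.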


Generally, adaptive design methods are most useful in the early, exploratory stages of clinical trials, when researchers are seeking to determine a drug's maximum tolerated dose. Investigators have used the continual reassessment method (CRM) with some success, particularly in cancer and stroke trials. This Bayesian model-based approach has been shown to estimate the maximum tolerated dose more accurately than traditional rule-based escalation schemes.
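To make the idea concrete, here is a minimal sketch of one CRM dose-finding step, assuming a hypothetical one-parameter "power" model - the toxicity probability at dose d is modelled as skeleton[d] ** exp(a), with a normal prior on a - and made-up skeleton probabilities. Real trials use purpose-built, validated software and far more careful modelling.

```python
import math

skeleton = [0.05, 0.10, 0.20, 0.35, 0.50]  # prior toxicity guesses per dose (hypothetical)
target = 0.25                              # target toxicity probability

def crm_recommend(doses_given, tox_outcomes, prior_sd=1.34):
    """Recommend the dose whose posterior mean toxicity is closest to target.

    doses_given: list of dose indices already administered.
    tox_outcomes: matching list of 0/1 toxicity observations.
    """
    # Integrate the posterior of the model parameter 'a' over a grid,
    # under a normal(0, prior_sd) prior.
    grid = [i * 0.01 for i in range(-400, 401)]
    num = [0.0] * len(skeleton)   # accumulates weight * p_tox(dose) per dose
    den = 0.0                     # accumulates total posterior weight
    for a in grid:
        prior = math.exp(-a * a / (2 * prior_sd ** 2))
        like = 1.0
        for d, y in zip(doses_given, tox_outcomes):
            p = skeleton[d] ** math.exp(a)
            like *= p if y else (1 - p)
        w = prior * like
        den += w
        for d in range(len(skeleton)):
            num[d] += w * skeleton[d] ** math.exp(a)
    post_tox = [n / den for n in num]  # posterior mean toxicity per dose
    return min(range(len(skeleton)), key=lambda d: abs(post_tox[d] - target))

# Example: three patients treated at dose index 2, one toxicity observed.
next_dose = crm_recommend([2, 2, 2], [0, 0, 1])
```

After each cohort the posterior is updated with the accumulated data and the next cohort is assigned the re-estimated dose - the "continual reassessment" that gives the method its name.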

Alternative methods

Other types of adaptive trial design are less likely to be approved. Adaptive randomisation, for instance, is complex and little used: the FDA advises that it "should be used cautiously in adequate and well-controlled studies, as the analysis is not as easily interpretable as when fixed randomisation probabilities are used."

Likewise, there is relatively low acceptance of 'enrichment' designs, which involve starting with a broad cohort and using appropriate biomarkers to narrow the group down. The concept is promising - in some cases, the sample group required to achieve statistical significance has dropped from around 1,500 to 50 - but in practice, the process is fraught with difficulties.
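The arithmetic behind that shrinkage is standard power-calculation territory: the per-arm sample size for comparing two means scales with the inverse square of the standardised effect size, so enriching to a subgroup with a much larger expected effect slashes the required cohort. The numbers below are a hypothetical illustration, not the trial behind the figures above.

```python
import math

# Approximate per-arm sample size for a two-arm comparison of means:
#   n ~= 2 * (z_alpha/2 + z_beta)^2 * (sigma / delta)^2
# using the usual z values for two-sided alpha = 0.05 and 80% power.
Z_ALPHA, Z_BETA = 1.96, 0.84

def per_arm_n(delta, sigma=1.0):
    """Smallest whole per-arm sample size to detect a mean difference delta."""
    return math.ceil(2 * (Z_ALPHA + Z_BETA) ** 2 * (sigma / delta) ** 2)

# Hypothetical enrichment: biomarker selection raises the detectable
# standardised effect from 0.25 to 1.0 standard deviations.
broad = per_arm_n(0.25)     # broad, unselected population
enriched = per_arm_n(1.0)   # biomarker-enriched subgroup
```

Under these assumptions the per-arm requirement falls from 251 to 16 - the same kind of drop described above, and the reason enrichment is so attractive despite its practical difficulties.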

After all, even fixed-format designs pose challenges. The investigator has to take into account factors including randomisation, supply, monitoring and data capture, and these elements become more taxing than ever when faced with changing parameters.

Take supplies, for instance. In any trial, it's important to have adequate supplies available and packaged to meet your requirements, but in an adaptive trial you have little idea what those requirements will actually be. Regulation poses a particular problem: if new dosing combinations are required, they must be supplied rapidly, accurately and in line with FDA rules.

Then there's the matter of distribution. You need to ensure accurate dispersion of materials across the various study sites, and careful tracking of the drug throughout the clinical trial life cycle. It's easy to see why careful planning and coordination are so important. In recent years, a wide range of third-party providers have expanded their service offerings to meet these new demands.

Perhaps the biggest hurdle, however, is data management. The data needs to be clean, coherent and instantly usable, with the trial infrastructure capable of withstanding whatever adaptations arise.

"You have to be able to get the data into a form that can be used by the statisticians," says Katz. "In a fixed trial there's always pressure to get data, fast, but in an adaptive trial you can't adapt without the data, so the pressure of having it available is much higher. The other part is 'can the data management systems adapt along with the trial?' You need to be able to design a system up front that matches the adapted protocol. If you have to rebuild your infrastructure every time, poor data management will hold back the trial."

Electrify

Katz is particularly excited by the prospects afforded by electronic data capture (EDC). Many companies, he remarks, are already making good use of EDC, implementing electronic diaries and electronic patient-reported outcomes (ePROs), which enable researchers to log information as it becomes available. This in turn allows them to make the adjustments they need swiftly, boosting the chance of a positive result.

At this stage, however, he feels such technologies are being used imperfectly. Ideally, an adaptive design would allow investigators to assess results in real time. By using wearable electronic devices such as blood pressure monitors, investigators would not need to rely on scheduled doctors' appointments and the like - any changes would be visible the moment they occurred.

"These tools are not being used as effectively as they could be," he opines. "If information is uploaded in real time, that would change it from a retrospective review of data to a tracking of data. If you're looking for a clinical sign to adapt, you would be able to see it in real time and call the patient in, instead of waiting weeks for a scheduled visit."

Any progress in this regard may require technological advances. Katz says the tools in question are not currently validated to the level where they can be used in clinical trials. Luckily, he sees opportunities on the horizon, particularly in the realm of mobile devices.


"If patients are wearing ECG monitors, for instance, real-time data will be collected by the cloud and brought over directly into your systems, and sponsors can make decisions much faster than they can today," he says. "They will not need to wait for the investigator to say yes or no to each adaptation that occurs - it can become almost automatic by design, which is much more ideal."

Stumbling blocks

Regarding adaptive trials in general, the main obstacles to implementation are logistical and regulatory. Funding mechanisms pose a particular concern - these generally lack the flexibility needed to accommodate sample-size modifications - and improved communication will clearly be necessary if these designs are to gain more widespread acceptance.

There is also work to be done if adaptive designs are to be used in the latter, confirmatory stages of a trial. In this instance, better software and simulations will be required. With improved access to dependable IT solutions, there will be scope to run adaptive seamless designs, which combine the exploratory and confirmatory phases into a single adaptive trial.

For the time being, adaptive designs continue to grow in prominence. They are well suited to small trials, such as for rare diseases, and hold promise for abandoned compounds. They have also made impressive inroads into cancer research.

"If you look at any of the journals, oncology is one of the biggest adopters of adaptive designs - almost every issue has examples of it," says Katz.

"Some companies are running many treatment arms simultaneously, and you've got these interim points where you realise some arms are just not worth continuing and you drop them.

"For individual sponsor companies, adaptive techniques have been very highly used and published, and I've used a few of those in my studies as well."

As the methodological hurdles are cleared away, this level of flexibility presents a tantalising prospect, and we can expect to see a much greater scope for adaptive trials in the years to come.

Terry Katz is head of global data management and statistics for Merck Animal Health. He holds a BSc in microbiology, an MSc in statistics and a post-MSc certificate in management information systems. He is an ASA-accredited professional statistician, an ASQ-certified quality engineer and Six-Sigma green belt.

