The costs of developing new drugs are rising rapidly. Since most pharmaceutical companies are already pushing timelines as hard as possible, shortening the non-clinical and clinical programmes offers little scope for further savings.
One way to control costs is to reduce the attrition rate. Since only one in ten compounds entering Phase 1 ultimately gains marketing approval, the industry would benefit greatly if poor candidates could be killed at an early stage, and biomarkers could offer a solution.
Biomarker research is driven by the hope that using biomarkers in non-clinical and clinical drug development could facilitate decisions on efficacy and safety, and reduce the time to market.
However, biomarkers are often confused with clinical and surrogate end-points. A biomarker is an objective measurement that can be used to indicate normal biological or pathological processes or a pharmacological response.
A clinical end-point measures a patient's condition following a therapeutic intervention, while a surrogate end-point serves as a substitute for such a clinical end-point.
Therefore, the role to which a biomarker, or a set of biomarkers, can aspire is that of a surrogate end-point. Regulatory authorities have been reluctant to accept biomarkers as surrogates for clinical outcomes, perhaps because they felt uncomfortable with the methodology and the pharmacodynamic background.
However, biomarkers are included in several governmental initiatives, such as the FDA Critical Path Initiative and the NIH Road Map.
Before a biomarker can be proposed as a surrogate for a clinical end-point, it has to be qualified for this purpose in an iterative process that runs alongside drug development. In parallel, the analytical method has to be challenged and tested to ensure that it is fit for purpose.
Although the method may not be validated as fully as a drug bioanalysis assay during the early stages, it has to be good enough to deliver accurate and reliable results. This becomes complicated when a standard assay, such as one developed for human serum, is used in the non-clinical stage with species and matrices for which the biomarker was never tested, making it difficult to interpret the results and translate them to the clinic. It is therefore mandatory that the suitability of the assay is verified at each step; only then can the authorities be convinced.
Meanwhile, there are some excellent examples of biomarkers that have become accepted surrogate end-points.
There is no official guidance on validation requirements for biomarker or pharmacodynamic (PD) assays. The sheer variety of procedures and differences in use make it complicated to standardise the validation process.
More than ten years after the Crystal City Conference laid the groundwork for the Food and Drug Administration guidance on Bioanalytical Method Validation, a conference on biomarkers held in Salt Lake City in 2003 emphasised the need for accurate and reliable biomarker assays.
The conference also identified two challenges for validation: for many biomarkers, no highly purified, well-defined reference standard is available, and the same biomarker may be used for various purposes or in different therapeutic areas, where relevant matrix differences can occur.
Since neither the rules for bioanalytical drug assays nor the performance characteristics used in clinical chemistry can simply be applied, analysts need to find new ways to demonstrate the validity of biomarker assays. The discussion on how biomarker methods can and should be validated is ongoing, and a consensus on the requirements should ultimately emerge.
It is vital that this is clarified, because the authorities will only accept clinically qualified and methodologically validated biomarkers as surrogate end-points for clinical studies.