PLoS One. 2016 Jun 22;11(6):e0156622. doi: 10.1371/journal.pone.0156622. eCollection 2016.

Using Clinical Trial Simulators to Analyse the Sources of Variance in Clinical Trials of Novel Therapies for Acute Viral Infections


Carolin Vegvari et al. PLoS One. 2016.

Abstract

Background: About 90% of drugs fail in clinical development. The question is whether trials fail because of insufficient efficacy of the new treatment, or rather because of poor trial design that is unable to detect the true efficacy. The variance of the measured endpoints is a major, largely underestimated source of uncertainty in clinical trial design, particularly in acute viral infections. We use a clinical trial simulator to demonstrate how a thorough consideration of the variability inherent in clinical trials of novel therapies for acute viral infections can improve trial design.

Methods and findings: We developed a clinical trial simulator to analyse the impact of three types of variation on the outcome of a challenge study of influenza treatments in infected patients: individual patient variability in the response to the drug, the variance of the measurement procedure, and the variance of the lower limit of quantification of endpoint measurements. In addition, we investigated the impact of protocol variation on clinical trial outcome. We found that the greatest source of variance was inter-individual variability in the natural course of infection. Running a larger phase II study can save up to $38 million if it avoids a phase III trial that is unlikely to succeed. In addition, low-sensitivity viral load assays can lead to false-negative trial outcomes.

Conclusions: Because of high inter-individual variability in natural infection, the most important variable in clinical trial design for challenge studies of potential novel influenza treatments is the number of participants; 100 participants are preferable to 50. Using more sensitive viral load assays increases the probability of a positive trial outcome, but may in some circumstances lead to false-positive outcomes. Clinical trial simulations are powerful tools to identify the most important sources of variance in clinical trials and thereby help improve trial design.
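The decomposition above lends itself directly to simulation. Below is a minimal sketch of a single simulated two-arm challenge trial that combines the three variance sources (patient-level variability in the course of infection, assay noise, and censoring at the lower limit of quantification). The triangular log10 viral-load curve, the arm size, the treatment-effect model and the Mann-Whitney comparison of log10 AUC endpoints are illustrative assumptions, not the authors' simulator.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

def simulate_arm(n_patients, efficacy, lloq_log10=3.33):
    """One trial arm: per-patient log10 viral-load AUC endpoints."""
    days = np.arange(0, 8)  # daily sampling for one week
    aucs = []
    for _ in range(n_patients):
        # 1. inter-individual variability in the natural course of infection
        #    (hypothetical triangular curve on the log10 scale)
        peak = rng.normal(6.0, 1.0)        # log10 peak viral load
        t_peak = rng.uniform(1.5, 3.0)     # day of the peak
        decline = rng.normal(1.0, 0.3)     # log10 decline per day after the peak
        true_log10 = np.where(days <= t_peak,
                              peak * days / t_peak,
                              peak - decline * (days - t_peak))
        # assumed treatment effect: proportional reduction of the log10 curve
        true_log10 = np.clip(true_log10 * (1.0 - efficacy), 0.0, None)
        # 2. assay noise on every measurement, 3. censoring at the LLOQ
        measured = true_log10 + rng.normal(0.0, 0.3, size=days.size)
        measured = np.maximum(measured, lloq_log10)
        # trapezoidal AUC of the censored log10 curve (daily spacing)
        aucs.append(0.5 * (measured[:-1] + measured[1:]).sum())
    return np.array(aucs)

# one simulated trial: 50 patients per arm (assumed size), one-sided comparison
placebo = simulate_arm(50, efficacy=0.0)
treated = simulate_arm(50, efficacy=0.6)
p = mannwhitneyu(treated, placebo, alternative="less").pvalue
print(f"treatment effect detected (p < 0.05): {p < 0.05}, p = {p:.4f}")
```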


Conflict of interest statement

Competing Interests: The authors have the following interests: The study was sponsored by the Janssen Prevention Center. RMA is a non-executive board member of GlaxoSmithKline. FDW is employed by the Janssen Prevention Center and Imperial College London. GJW is an employee of the Janssen Prevention Center. There are no patents, products in development or marketed products to declare. This does not alter our adherence to all the PLOS ONE policies on sharing data and materials.

Figures

Fig 1. Viral load curves over time of 50 (a) and 100 (b) patients.
If there are fewer curves than simulated patients, the infection did not take off in the remaining patients (R0 < 1); their curves are flat lines along the x-axis. x-axis: time in days; y-axis: viral load in particles per ml.
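Per-patient viral load curves of this kind can be generated from a within-host model. The sketch below assumes a standard target-cell-limited influenza model with lognormal inter-patient parameter variation; the model form and the nominal parameter values (in the spirit of published influenza kinetics models) are assumptions, since the caption does not specify the authors' model. Patients whose drawn parameters give R0 < 1 produce the flat curves mentioned above.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

def tiv(t, y, beta, p, c, delta):
    """Target-cell-limited model: target cells T, infected cells I, free virus V."""
    T, I, V = y
    return [-beta * T * V,
            beta * T * V - delta * I,
            p * I - c * V]

def patient_curve(days=np.linspace(0, 8, 200)):
    # lognormal inter-patient variation around nominal values (assumed)
    beta  = 2.7e-5 * rng.lognormal(0.0, 0.5)   # infection rate
    p     = 1.2e-2 * rng.lognormal(0.0, 0.5)   # virus production rate
    c     = 3.0    * rng.lognormal(0.0, 0.3)   # virus clearance rate
    delta = 4.0    * rng.lognormal(0.0, 0.3)   # infected-cell death rate
    T0, V0 = 4e8, 7.5e-2
    R0 = beta * p * T0 / (c * delta)           # if R0 < 1 the infection never takes off
    sol = solve_ivp(tiv, (days[0], days[-1]), [T0, 0.0, V0],
                    t_eval=days, args=(beta, p, c, delta), rtol=1e-8)
    return days, sol.y[2], R0

for i in range(5):
    t, V, R0 = patient_curve()
    print(f"patient {i}: R0 = {R0:6.1f}, peak log10 viral load = {np.log10(V.max() + 1e-12):5.1f}")
```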
Fig 2. Variance of viral load measurements.
For this figure we simulated influenza virus infection in 1000 patients who all had the same natural course of infection. For each patient, viral load measurements were generated, simulating qPCR (upper row) and TCID50 assays (lower row). The variance of the TCID50 assay (lognormal distribution) is greater than that of the qPCR assay (Poisson distribution). The variance of the viral load assays is small compared to the variation in natural infection among patients. Upper row: qPCR measurements, unit: viral cDNA/ml. Lower row: TCID50 measurements, unit: TCID50/ml. a, c: true (simulated) viral load curves of 1000 patients with the same course of natural infection. b, d: simulated measurements of 1000 patients with the same natural course of infection. b: lower limit of quantification of the qPCR assay was 3.33 log10 cDNA/ml (2137 cDNA/ml). d: lower limit of quantification of the TCID50 assay was 2 log10 TCID50/ml (100 TCID50/ml).
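A hedged sketch of the two measurement models named in the caption: Poisson counting noise for the qPCR assay and lognormal noise for the TCID50 assay, each reported at its lower limit of quantification when the measurement falls below it. The TCID50 noise magnitude and the example viral load curve are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def qpcr_measure(true_v, lloq=10**3.33):
    """qPCR: Poisson-distributed copy numbers, censored at the LLOQ (simplified)."""
    measured = rng.poisson(true_v).astype(float)
    return np.maximum(measured, lloq)

def tcid50_measure(true_v, lloq=10**2, sigma_log10=0.5):
    """TCID50: lognormal assay noise on the true titre (sigma is an assumed value)."""
    measured = true_v * 10 ** rng.normal(0.0, sigma_log10, size=np.shape(true_v))
    return np.maximum(measured, lloq)

# the same true viral load curve measured with both assays
days = np.arange(0, 8)
true_v = 10 ** np.array([0.0, 3.5, 6.0, 5.5, 4.5, 3.5, 2.5, 1.5])  # particles/ml
print("day  log10 true  log10 qPCR  log10 TCID50")
for d, v, q, t in zip(days, true_v, qpcr_measure(true_v), tcid50_measure(true_v)):
    print(f"{d:3d}  {np.log10(v):10.2f}  {np.log10(q):10.2f}  {np.log10(t):12.2f}")
```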
Fig 3. Experiment 1: Individual variability in natural infection.
The plots show the number of successful trials out of 100 simulated trial runs (y-axis) for different assumed efficacies of the treatment (x-axis). All 100 iterations for each assumed efficacy value had exactly the same setup and differed only in the random number seed. The number of successful trials out of 100 runs can be interpreted as the power of the trial. The power of the trial depends on the mechanism of action of the treatment and the number of patients in the trial. As individual variability in natural infection is large, trials with 50 patients do not reach a power of 80%, even if the assumed efficacy of the treatment is high (90+%). Coloured lines show the number of successful trials out of 100 runs depending on efficacy for different endpoint measurements (PCR: viral load AUC measured with qPCR; Symptom: temperature AUC; TCID: viral load AUC measured with TCID50; Simulated: AUC of the simulated (true) viral load curve). Upper row: trials with 50 patients. Lower row: trials with 100 patients. a, e: treatment acts on all stages of the virus life cycle/model parameters. b, f: treatment acts on the infection rate. c, g: treatment acts on the virus production rate. d, h: treatment acts on the virus clearance rate.
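The power estimate used in these experiments is simply the fraction of 100 identically configured runs, differing only in random seed, that reach statistical significance. The sketch below reproduces that construction with a deliberately simplified endpoint model; the endpoint scale, the large between-patient standard deviation, the mapping from efficacy to effect size, and the t-test are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

def trial_success(n_per_arm, efficacy, seed, between_sd=6.0, alpha=0.05):
    """One simulated trial: compare log10 viral-load AUC between the two arms."""
    rng = np.random.default_rng(seed)
    placebo = rng.normal(25.0, between_sd, n_per_arm)  # assumed endpoint scale
    # assumed effect model: efficacy maps to at most a 20% reduction in mean AUC
    treated = rng.normal(25.0 * (1.0 - 0.2 * efficacy), between_sd, n_per_arm)
    return ttest_ind(treated, placebo).pvalue < alpha

for n_total in (50, 100):
    for efficacy in (0.5, 0.7, 0.9):
        runs = [trial_success(n_total // 2, efficacy, seed) for seed in range(100)]
        print(f"n = {n_total:3d}, efficacy = {efficacy:.1f}: "
              f"estimated power = {np.mean(runs):.2f}")
```

With a between-patient spread this large, the 50-patient trials struggle to reach 80% power even at high efficacy, while the 100-patient trials do considerably better, which is the qualitative pattern reported in the caption.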
Fig 4. Experiment 5: Sensitivity of viral load assay (qPCR).
Plots show the number of successful trials out of 100 runs (y-axis) over the assumed mean efficacy of treatment (x-axis). The probability of success corresponds to the power of the trial. The parameters determining the course of natural infection were drawn from the same random number distributions for each patient, as explained in the main text. The efficacy (response) was fixed to the same value for each patient in each run. Thin coloured lines show the power of the trial as a function of treatment efficacy for qPCR viral load assays with different assumed lower limits of quantification. The bold red line shows the power of the trial as a function of treatment efficacy when the simulated viral load curve is used directly. Very insensitive assays can greatly reduce the power of a trial, especially for potent drugs that act on several stages of the virus life cycle (a). Conversely, if the treatment acts on the infection rate (b) or the virus production rate (c), very sensitive viral load assays tend to give false-positive results. Trials with 100 patients. a: treatment acts on all model parameters. b: treatment acts on the infection rate. c: treatment acts on the virus production rate. d: treatment acts on the virus clearance rate.
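A brief sketch of the sensitivity effect described here: censoring the same pair of viral load curves at increasingly high quantification limits shrinks the measurable treatment difference, which is how an insensitive assay can turn a real effect into a false-negative trial. The curves and the LLOQ values are illustrative assumptions.

```python
import numpy as np

days = np.arange(0, 8)
# illustrative log10 viral load curves (particles/ml): untreated vs treated patient
untreated = np.array([0.5, 4.0, 6.0, 5.5, 4.5, 3.5, 2.5, 1.5])
treated   = np.array([0.5, 3.0, 4.0, 3.5, 2.5, 1.5, 1.0, 0.5])

def log_auc(curve_log10, lloq_log10):
    """AUC of the log10 curve after reporting values below the LLOQ at the LLOQ."""
    censored = np.maximum(curve_log10, lloq_log10)
    return 0.5 * (censored[:-1] + censored[1:]).sum()  # trapezoid, daily spacing

for lloq in (1.0, 2.0, 3.33, 4.5):
    diff = log_auc(untreated, lloq) - log_auc(treated, lloq)
    print(f"LLOQ = {lloq:4.2f} log10/ml: measured AUC difference = {diff:5.2f}")
```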
Fig 5. Experiment 6: Day of Treatment. Treatment acts on all model parameters.
Plots show the number of successful trials out of 100 runs (y-axis) over the assumed mean efficacy of treatment (x-axis). The probability of success corresponds to the power of the trial. The parameters determining the course of natural infection were drawn from the same random number distributions for each patient, as explained in the main text. The efficacy (response) was fixed to the same value for each patient in each run. The later treatment is given, the lower the power of the trial. a: treatment on day 1; b: treatment on day 2; c: treatment on day 3. Coloured lines show the power of the trial depending on efficacy for different endpoint measurements (PCR: viral load AUC measured with qPCR; Symptom: temperature AUC; TCID: viral load AUC measured with TCID50; True: AUC of the simulated (true) viral load curve).
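A compact sketch of the protocol variation explored in this experiment: if the treatment effect is applied only from the treatment day onward, later dosing leaves more of the viral load curve unmodified and therefore less room to detect a difference. The example curve and the simple multiplicative effect on the log10 scale are illustrative assumptions.

```python
import numpy as np

days = np.arange(0, 8)
untreated = np.array([0.5, 4.0, 6.0, 5.5, 4.5, 3.5, 2.5, 1.5])  # log10 viral load

def treated_curve(untreated_log10, efficacy, treatment_day):
    """Apply a proportional reduction of the log10 curve from the treatment day on."""
    curve = untreated_log10.copy()
    on = days >= treatment_day
    curve[on] = curve[on] * (1.0 - efficacy)
    return curve

for day in (1, 2, 3):
    reduction = untreated.sum() - treated_curve(untreated, 0.5, day).sum()
    print(f"treatment on day {day}: crude log10 AUC reduced by {reduction:5.2f}")
```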
