Review. 2023 Feb;7:e2200606. doi: 10.1200/PO.22.00606.

Validation of Predictive Analyses for Interim Decisions in Clinical Trials


Alejandra Avalos-Pacheco et al. JCO Precis Oncol. 2023 Feb.

Abstract

Purpose: Adaptive clinical trials use algorithms to predict patient outcomes and final study results while the study is ongoing. These predictions trigger interim decisions, such as early discontinuation of the trial, and can change the course of the study. A poorly chosen Prediction Analyses and Interim Decisions (PAID) plan in an adaptive clinical trial can have negative consequences, including the risk of exposing patients to ineffective or toxic treatments.

Methods: We present an approach that leverages data sets from completed trials to evaluate and compare candidate PAIDs using interpretable validation metrics. The goal is to determine whether and how to incorporate predictions into major interim decisions in a clinical trial. Candidate PAIDs can differ in several aspects, such as the prediction models used, the timing of interim analyses, and the potential use of external data sets. To illustrate our approach, we considered a randomized clinical trial in glioblastoma. The study design includes interim futility analyses on the basis of the predictive probability that the final analysis, at the completion of the study, will provide significant evidence of treatment effects. We examined various PAIDs with different levels of complexity to investigate whether the use of biomarkers, external data, or novel algorithms improved interim decisions in the glioblastoma clinical trial.
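For concreteness, the interim futility rule described above can be sketched in code. This is a minimal single-arm, beta-binomial illustration only, not the paper's method: the glioblastoma trial is randomized, and the function name, priors, and thresholds below are our assumptions for illustration.

```python
import random

def predictive_prob_success(successes, n_interim, n_final,
                            p_null=0.25, a0=0.5, b0=0.5,
                            sig_threshold=0.975, n_sims=4000, seed=1):
    """Monte Carlo predictive probability that the final analysis will
    declare a treatment effect, given interim data (single-arm sketch).

    'Significant' at the final analysis means the posterior probability
    that the response rate exceeds p_null is above sig_threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Draw a plausible response rate from the interim posterior
        p = rng.betavariate(a0 + successes, b0 + n_interim - successes)
        # Simulate binary outcomes for the patients not yet observed
        future = sum(rng.random() < p for _ in range(n_final - n_interim))
        a_fin = a0 + successes + future
        b_fin = b0 + n_final - (successes + future)
        # Approximate posterior P(rate > p_null) at the final analysis
        post = sum(rng.betavariate(a_fin, b_fin) > p_null
                   for _ in range(200)) / 200
        hits += post > sig_threshold
    return hits / n_sims

# Futility rule of the form in expression 1: stop early if the
# predictive probability of final success falls below a threshold b
pp = predictive_prob_success(successes=5, n_interim=33, n_final=100)
stop_for_futility = pp < 0.15  # b = 0.15, as in the figures
```

A candidate PAID in this framework would swap the beta-binomial posterior for a richer prediction model (logistic regression, BART, with or without external controls) while keeping the same stopping rule.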

Results: Validation analyses on the basis of completed trials and electronic health records support the selection of algorithms, predictive models, and other aspects of PAIDs for use in adaptive clinical trials. By contrast, PAID evaluations on the basis of arbitrarily defined ad hoc simulation scenarios, which are not tailored to previous clinical data and experience, tend to overvalue complex prediction procedures and produce poor estimates of trial operating characteristics such as power and the number of enrolled patients.

Conclusion: Validation analyses on the basis of completed trials and real-world data support the selection of predictive models, interim analysis rules, and other aspects of PAIDs in future clinical trials.


Conflict of interest statement

The following represents disclosure information provided by authors of this manuscript. All relationships are considered compensated unless otherwise noted. Relationships are self-held unless noted. I = Immediate Family Member, Inst = My Institution. Relationships may not relate to the subject matter of this manuscript. For more information about ASCO's conflict of interest policy, please refer to www.asco.org/rwc or ascopubs.org/po/author-center.

Open Payments is a public database containing information reported by companies about payments made to US-licensed physicians.

Figures

FIG 1.
Model-based evaluations. Graphical summaries for the predictive accuracy of five candidate PAIDs using model-based simulations, scenario 1 in Table 3. (A) ROC curves, (B) calibration curves, (C) frequency plots, and (D) the probability of early stopping (cf. expression 1, which includes the threshold b) when TE = 0 for PAIDs with an IA conducted after the first 33 outcomes have been observed. (E) and (F) report Brier scores and the probability of early stopping for trials with TE > 0 for PAIDs with the IA conducted after the first 33, 66, 83, or 100 outcomes (threshold b = 0.15). We compare PAIDs that use BB, LR, BART, LR-EC, and BART-EC (see Methods section). BART, Bayesian Additive Regression Trees; BART-EC, BART leveraging RCT and external control data; BB, beta-binomial; IA, interim analysis; LR, logistic regression; LR-EC, logistic regression leveraging RCT and external control data; PAIDs, Prediction Analyses and Interim Decisions; ROC, receiver operating characteristic; TE, treatment effect.
FIG 2.
Model-free evaluations. We used individual patient-level data from the data set in the study by Chinot et al to generate in silico RCTs with the resampling algorithm (see Methods Section). (A) ROC curves, (B) calibration curves, (C) frequency plots, and (D) the probability of early stopping (cf. expression 1, which includes the threshold b) when TE = 0 for PAIDs with an IA conducted after the first 26 outcomes become available. (E) Brier scores and (F) the probability of early stopping (threshold b = 0.15) for trials with TE > 0. In (E) and (F), the PAIDs include an IA after 26, 52, or 78 outcomes become available from the trial. We compare PAIDs that use BB, LR, BART, LR-EC, and BART-EC methodologies (see Methods Section). BART, Bayesian Additive Regression Trees; BART-EC, BART leveraging RCT and external control data; BB, beta-binomial; IA, interim analysis; LR, logistic regression; LR-EC, logistic regression leveraging RCT and external control data; PAIDs, Prediction Analyses and Interim Decisions; ROC, receiver operating characteristic; TE, treatment effect.
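The Brier score reported in panel (E) of each figure is the mean squared difference between a PAID's predicted probability of final success and the realized binary result across simulated trials; lower is better. A minimal sketch, with hypothetical numbers of our own choosing:

```python
def brier_score(predicted, observed):
    """Mean squared error between predicted probabilities and
    realized binary outcomes (0/1); lower values indicate
    better-calibrated, sharper predictions."""
    return sum((p - y) ** 2 for p, y in zip(predicted, observed)) / len(observed)

# Hypothetical predictions from two candidate PAIDs over four
# resampled in silico trials with known final results
preds_sharp = [0.9, 0.2, 0.8, 0.1]
preds_vague = [0.6, 0.4, 0.5, 0.5]
results = [1, 0, 1, 0]

brier_score(preds_sharp, results)  # 0.025
brier_score(preds_vague, results)  # 0.205
```

The same predicted/observed pairs also feed the ROC and calibration curves in panels (A) and (B).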
