BMC Bioinformatics. 2006 Feb 23;7:91. doi: 10.1186/1471-2105-7-91.

Bias in error estimation when using cross-validation for model selection


Sudhir Varma et al. BMC Bioinformatics, 2006.

Abstract

Background: Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data.
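
To make the evaluated protocol concrete, the flawed procedure can be sketched on "null" data with scikit-learn. This is a minimal illustration, not the authors' simulation code: the sample size, feature count, parameter grid, and the use of univariate feature selection as a stand-in for the tuned gene count are all assumptions made for the example.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# "Null" data: the features carry no information about the class labels,
# so the true error rate of any classifier is 50% by construction.
rng = np.random.default_rng(0)
n_samples, n_features = 40, 1000          # illustrative sizes, not the paper's
X = rng.standard_normal((n_samples, n_features))
y = np.repeat([0, 1], n_samples // 2)

# Tune the number of selected features and the SVM cost by LOOCV, then
# (incorrectly) report the minimized CV error as the error estimate.  The
# minimum over a grid of noisy fold-wise estimates is optimistically biased.
pipe = Pipeline([("select", SelectKBest(f_classif)),
                 ("svm", SVC(kernel="linear"))])
search = GridSearchCV(pipe,
                      {"select__k": [10, 50, 200], "svm__C": [0.1, 1, 10]},
                      cv=LeaveOneOut())
search.fit(X, y)
print(f"biased CV error estimate: {1 - search.best_score_:.2f}")  # dips below 0.5

# True error on independent "null" test data is 50% by construction.
X_test = rng.standard_normal((1000, n_features))
y_test = rng.integers(0, 2, size=1000)
print(f"independent test error: {1 - search.score(X_test, y_test):.2f}")  # ~0.5
```

The optimistic bias grows with the number of parameter settings searched, since the reported figure is the minimum over more noisy estimates; the grids in the paper were larger than this illustrative one.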

Results: We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids, while Leave-One-Out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, in which an inner CV loop is used to tune the parameters while an outer CV loop is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes in the "null" datasets, the CV error estimate for Shrunken Centroids with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For the SVM with optimal parameters, the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent test set, for both Shrunken Centroids and SVM classifiers and for both "null" and "non-null" data distributions.
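
The nested procedure can be sketched by wrapping the tuning step in an outer loop: the outer CV scores the entire grid search, so parameter selection is repeated from scratch on each outer training set. The outer fold count is an illustrative choice, and the snippet reuses `search`, `X`, and `y` from the previous sketch.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Nested CV: the outer loop evaluates the *whole* tuning procedure, so the
# inner LOOCV grid search runs afresh on every outer training fold.
outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
nested_scores = cross_val_score(search, X, y, cv=outer)   # `search` from above
print(f"nested CV error estimate: {1 - nested_scores.mean():.2f}")  # near 0.5
```

On null data the nested estimate concentrates near the 50% chance level, matching the behavior the abstract reports for the independent test set.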

Conclusion: We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed with a well-defined algorithm requires that all steps of the algorithm, including parameter tuning, be repeated within each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.


Figures

Figure 1: Distribution of the CV error estimate and the true error for optimized Shrunken Centroids.
Figure 2: Distribution of the CV error estimate and the true error for the optimized Support Vector Machine.
Figure 3: Distribution of the nested CV error estimate and the true error for optimized Shrunken Centroids nested within a LOOCV loop.
Figure 4: Distribution of the nested CV error estimate and the true error for the optimized SVM nested within a LOOCV loop.
